Test Report: Docker_Linux_containerd_arm64 21643

cc42fd2f8cec8fa883ff6f7397a2f6141c487062:2025-10-02:41725

Failed tests: 12/332

TestAddons/serial/Volcano (719.46s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 38.679214ms
addons_test.go:876: volcano-admission stabilized in 39.695337ms
addons_test.go:868: volcano-scheduler stabilized in 40.133477ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-jt89z" [14e70d56-3e47-4d75-91b5-fda07d412971] Pending / Ready:ContainersNotReady (containers with unready status: [volcano-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [volcano-scheduler])
helpers_test.go:352: "volcano-scheduler-76c996c8bf-jt89z" [14e70d56-3e47-4d75-91b5-fda07d412971] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5m44.00431391s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-vbfkj" [16519b81-022a-4e88-828f-109cc81af16b] Pending / Ready:ContainersNotReady (containers with unready status: [admission]) / ContainersReady:ContainersNotReady (containers with unready status: [admission])
helpers_test.go:337: TestAddons/serial/Volcano: WARNING: pod list for "volcano-system" "app=volcano-admission" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:894: ***** TestAddons/serial/Volcano: pod "app=volcano-admission" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:894: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-110926 -n addons-110926
addons_test.go:894: TestAddons/serial/Volcano: showing logs for failed pods as of 2025-10-02 06:55:09.306790696 +0000 UTC m=+1140.194138025
addons_test.go:894: (dbg) Run:  kubectl --context addons-110926 describe po volcano-admission-6c447bd768-vbfkj -n volcano-system
addons_test.go:894: (dbg) kubectl --context addons-110926 describe po volcano-admission-6c447bd768-vbfkj -n volcano-system:
Name:                 volcano-admission-6c447bd768-vbfkj
Namespace:            volcano-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      volcano-admission
Node:                 addons-110926/192.168.49.2
Start Time:           Thu, 02 Oct 2025 06:37:56 +0000
Labels:               app=volcano-admission
pod-template-hash=6c447bd768
Annotations:          rollme/helm-revision: 1
Status:               Pending
SeccompProfile:       RuntimeDefault
IP:                   
IPs:                  <none>
Controlled By:        ReplicaSet/volcano-admission-6c447bd768
Containers:
admission:
Container ID:  
Image:         docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001
Image ID:      
Port:          <none>
Host Port:     <none>
Args:
--enabled-admission=/jobs/mutate,/jobs/validate,/podgroups/validate,/queues/mutate,/queues/validate,/hypernodes/validate,/cronjobs/validate
--tls-cert-file=/admission.local.config/certificates/tls.crt
--tls-private-key-file=/admission.local.config/certificates/tls.key
--ca-cert-file=/admission.local.config/certificates/ca.crt
--admission-conf=/admission.local.config/configmap/volcano-admission.conf
--webhook-namespace=volcano-system
--webhook-service-name=volcano-admission-service
--enable-healthz=true
--logtostderr
--port=8443
-v=4
2>&1
State:          Waiting
Reason:       ContainerCreating
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/admission.local.config/certificates from admission-certs (ro)
/admission.local.config/configmap from admission-config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fzzjd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   False 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
admission-certs:
Type:        Secret (a volume populated by a Secret)
SecretName:  volcano-admission-secret
Optional:    false
admission-config:
Type:      ConfigMap (a volume populated by a ConfigMap)
Name:      volcano-admission-configmap
Optional:  false
kube-api-access-fzzjd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                 From               Message
----     ------            ----                ----               -------
Warning  FailedScheduling  17m                 default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal   Scheduled         17m                 default-scheduler  Successfully assigned volcano-system/volcano-admission-6c447bd768-vbfkj to addons-110926
Warning  FailedMount       50s (x16 over 17m)  kubelet            MountVolume.SetUp failed for volume "admission-certs" : secret "volcano-admission-secret" not found
addons_test.go:894: (dbg) Run:  kubectl --context addons-110926 logs volcano-admission-6c447bd768-vbfkj -n volcano-system
addons_test.go:894: (dbg) Non-zero exit: kubectl --context addons-110926 logs volcano-admission-6c447bd768-vbfkj -n volcano-system: exit status 1 (128.737836ms)

** stderr ** 
	Error from server (BadRequest): container "admission" in pod "volcano-admission-6c447bd768-vbfkj" is waiting to start: ContainerCreating

** /stderr **
addons_test.go:894: kubectl --context addons-110926 logs volcano-admission-6c447bd768-vbfkj -n volcano-system: exit status 1
addons_test.go:895: failed waiting for app=volcano-admission pod: app=volcano-admission within 6m0s: context deadline exceeded
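Editorial note: the FailedMount events in the `kubectl describe` output above point at the root cause: the `admission-certs` volume could not be mounted because the secret `volcano-admission-secret` was never created, which keeps the `admission` container stuck in ContainerCreating until the 6m0s wait expires. When triaging similar failures, the volume and secret names can be pulled straight out of the event message; a minimal sketch (hypothetical helper, assuming the message format shown in the events above):

```shell
# Extract the volume and secret names from a kubelet FailedMount event
# message, using the exact message text seen in the events above.
msg='MountVolume.SetUp failed for volume "admission-certs" : secret "volcano-admission-secret" not found'

volume=$(printf '%s\n' "$msg" | sed -n 's/.*failed for volume "\([^"]*\)".*/\1/p')
secret=$(printf '%s\n' "$msg" | sed -n 's/.*secret "\([^"]*\)" not found.*/\1/p')

echo "missing mount: volume=$volume secret=$secret"
# → missing mount: volume=admission-certs secret=volcano-admission-secret
```

With the secret name in hand, `kubectl --context addons-110926 -n volcano-system get secret volcano-admission-secret` would confirm whether the secret exists; here it does not, so the pod can never leave the Pending state.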
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/serial/Volcano]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-110926
helpers_test.go:243: (dbg) docker inspect addons-110926:

-- stdout --
	[
	    {
	        "Id": "e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d",
	        "Created": "2025-10-02T06:36:47.077600034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 814321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:36:47.138474038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/hostname",
	        "HostsPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/hosts",
	        "LogPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d-json.log",
	        "Name": "/addons-110926",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-110926:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-110926",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d",
	                "LowerDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d-init/diff:/var/lib/docker/overlay2/f1b2a52495d4d5d1e70fc487fac677b5080c5f1320773666a738aa42def3e2df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-110926",
	                "Source": "/var/lib/docker/volumes/addons-110926/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-110926",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-110926",
	                "name.minikube.sigs.k8s.io": "addons-110926",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e03dfd9e44981225a70f6640c6b12a48805938cfdd54b566df7bddffa824b2d",
	            "SandboxKey": "/var/run/docker/netns/6e03dfd9e449",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-110926": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:3c:a1:2d:84:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c2d471fc3c60a7f5a83ca737cf0a22c0c0076227d91a7e348867826280521af7",
	                    "EndpointID": "885b90e051ad80837eb5c6d3c161821bbf8a3c111f24b170e0bc233d0690c448",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-110926",
	                        "e88a06110ea1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-110926 -n addons-110926
helpers_test.go:252: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 logs -n 25: (1.470659652s)
helpers_test.go:260: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS                                                                                                                                                                                                                                                                                                                                                                                                                                                                         | PROFILE                | USER    | VERSION | START TIME          | END TIME
	start   | -o=json --download-only -p download-only-492765 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                          | download-only-492765   | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC |
	delete  | --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | minikube               | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:36 UTC
	delete  | -p download-only-492765                                                                                                                                                                                                                                                                                                                                                                                                                                                      | download-only-492765   | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:36 UTC
	start   | -o=json --download-only -p download-only-547243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                          | download-only-547243   | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC |
	delete  | --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                        | minikube               | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:36 UTC
	delete  | -p download-only-547243                                                                                                                                                                                                                                                                                                                                                                                                                                                      | download-only-547243   | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:36 UTC
	delete  | -p download-only-492765                                                                                                                                                                                                                                                                                                                                                                                                                                                      | download-only-492765   | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:36 UTC
	delete  | -p download-only-547243                                                                                                                                                                                                                                                                                                                                                                                                                                                      | download-only-547243   | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:36 UTC
	start   | --download-only -p download-docker-533728 --alsologtostderr --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                                                                  | download-docker-533728 | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC |
	delete  | -p download-docker-533728                                                                                                                                                                                                                                                                                                                                                                                                                                                    | download-docker-533728 | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:36 UTC
	start   | --download-only -p binary-mirror-704812 --alsologtostderr --binary-mirror http://127.0.0.1:37961 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                             | binary-mirror-704812   | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC |
	delete  | -p binary-mirror-704812                                                                                                                                                                                                                                                                                                                                                                                                                                                      | binary-mirror-704812   | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:36 UTC
	addons  | enable dashboard -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                                                            | addons-110926          | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC |
	addons  | disable dashboard -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                                                           | addons-110926          | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC |
	start   | -p addons-110926 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher | addons-110926          | jenkins | v1.37.0 | 02 Oct 25 06:36 UTC | 02 Oct 25 06:43 UTC
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:36:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:36:21.580334  813918 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:36:21.580482  813918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:21.580492  813918 out.go:374] Setting ErrFile to fd 2...
	I1002 06:36:21.580497  813918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:21.580834  813918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 06:36:21.581311  813918 out.go:368] Setting JSON to false
	I1002 06:36:21.582265  813918 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22731,"bootTime":1759364251,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 06:36:21.582336  813918 start.go:140] virtualization:  
	I1002 06:36:21.585831  813918 out.go:179] * [addons-110926] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:36:21.589067  813918 notify.go:220] Checking for updates...
	I1002 06:36:21.589658  813918 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:36:21.592579  813918 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:36:21.595634  813918 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:36:21.598400  813918 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 06:36:21.601243  813918 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 06:36:21.604214  813918 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:36:21.607495  813918 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:36:21.629855  813918 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:36:21.629989  813918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:21.693096  813918 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:36:21.683464105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:21.693212  813918 docker.go:318] overlay module found
	I1002 06:36:21.698158  813918 out.go:179] * Using the docker driver based on user configuration
	I1002 06:36:21.700959  813918 start.go:304] selected driver: docker
	I1002 06:36:21.700986  813918 start.go:924] validating driver "docker" against <nil>
	I1002 06:36:21.701000  813918 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:36:21.701711  813918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:21.758634  813918 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:36:21.749346343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:21.758811  813918 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:36:21.759085  813918 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:36:21.762043  813918 out.go:179] * Using Docker driver with root privileges
	I1002 06:36:21.764916  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:36:21.764987  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:21.765005  813918 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:36:21.765078  813918 start.go:348] cluster config:
	{Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:21.768148  813918 out.go:179] * Starting "addons-110926" primary control-plane node in "addons-110926" cluster
	I1002 06:36:21.771007  813918 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 06:36:21.773962  813918 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:36:21.776817  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:21.776869  813918 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 06:36:21.776883  813918 cache.go:58] Caching tarball of preloaded images
	I1002 06:36:21.776920  813918 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:36:21.776978  813918 preload.go:233] Found /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 06:36:21.776988  813918 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 06:36:21.777328  813918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json ...
	I1002 06:36:21.777357  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json: {Name:mk2f8f9458f5bc5a3d522cc7bc03c497073f8f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:21.792651  813918 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:36:21.792805  813918 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:36:21.792830  813918 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 06:36:21.792839  813918 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 06:36:21.792848  813918 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 06:36:21.792856  813918 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 06:36:39.840628  813918 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 06:36:39.840677  813918 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:36:39.840706  813918 start.go:360] acquireMachinesLock for addons-110926: {Name:mk5b3ba2eb8943c76c6ef867a9f0efe000290e8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:36:39.840853  813918 start.go:364] duration metric: took 124.262µs to acquireMachinesLock for "addons-110926"
	I1002 06:36:39.840884  813918 start.go:93] Provisioning new machine with config: &{Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 06:36:39.840959  813918 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:36:39.844345  813918 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 06:36:39.844567  813918 start.go:159] libmachine.API.Create for "addons-110926" (driver="docker")
	I1002 06:36:39.844615  813918 client.go:168] LocalClient.Create starting
	I1002 06:36:39.844744  813918 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem
	I1002 06:36:40.158293  813918 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem
	I1002 06:36:40.423695  813918 cli_runner.go:164] Run: docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:36:40.439045  813918 cli_runner.go:211] docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:36:40.439144  813918 network_create.go:284] running [docker network inspect addons-110926] to gather additional debugging logs...
	I1002 06:36:40.439166  813918 cli_runner.go:164] Run: docker network inspect addons-110926
	W1002 06:36:40.454853  813918 cli_runner.go:211] docker network inspect addons-110926 returned with exit code 1
	I1002 06:36:40.454885  813918 network_create.go:287] error running [docker network inspect addons-110926]: docker network inspect addons-110926: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-110926 not found
	I1002 06:36:40.454900  813918 network_create.go:289] output of [docker network inspect addons-110926]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-110926 not found
	
	** /stderr **
	I1002 06:36:40.454994  813918 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:36:40.471187  813918 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3c190}
	I1002 06:36:40.471239  813918 network_create.go:124] attempt to create docker network addons-110926 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:36:40.471291  813918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-110926 addons-110926
	I1002 06:36:40.528426  813918 network_create.go:108] docker network addons-110926 192.168.49.0/24 created
	I1002 06:36:40.528461  813918 kic.go:121] calculated static IP "192.168.49.2" for the "addons-110926" container
	I1002 06:36:40.528550  813918 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:36:40.544507  813918 cli_runner.go:164] Run: docker volume create addons-110926 --label name.minikube.sigs.k8s.io=addons-110926 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:36:40.560870  813918 oci.go:103] Successfully created a docker volume addons-110926
	I1002 06:36:40.560961  813918 cli_runner.go:164] Run: docker run --rm --name addons-110926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --entrypoint /usr/bin/test -v addons-110926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:36:42.684275  813918 cli_runner.go:217] Completed: docker run --rm --name addons-110926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --entrypoint /usr/bin/test -v addons-110926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.123276184s)
	I1002 06:36:42.684309  813918 oci.go:107] Successfully prepared a docker volume addons-110926
	I1002 06:36:42.684338  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:42.684360  813918 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:36:42.684441  813918 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-110926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:36:47.011851  813918 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-110926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.327364513s)
	I1002 06:36:47.011897  813918 kic.go:203] duration metric: took 4.327533581s to extract preloaded images to volume ...
	W1002 06:36:47.012040  813918 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 06:36:47.012157  813918 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:36:47.062619  813918 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-110926 --name addons-110926 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-110926 --network addons-110926 --ip 192.168.49.2 --volume addons-110926:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:36:47.379291  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Running}}
	I1002 06:36:47.400798  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.426150  813918 cli_runner.go:164] Run: docker exec addons-110926 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:36:47.477926  813918 oci.go:144] the created container "addons-110926" has a running status.
	I1002 06:36:47.477953  813918 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa...
	I1002 06:36:47.781138  813918 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:36:47.806163  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.827180  813918 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:36:47.827199  813918 kic_runner.go:114] Args: [docker exec --privileged addons-110926 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:36:47.891791  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.911592  813918 machine.go:93] provisionDockerMachine start ...
	I1002 06:36:47.911695  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:47.930991  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:47.931327  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:47.931345  813918 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:36:47.931960  813918 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57194->127.0.0.1:33863: read: connection reset by peer
	I1002 06:36:51.072477  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110926
	
	I1002 06:36:51.072569  813918 ubuntu.go:182] provisioning hostname "addons-110926"
	I1002 06:36:51.072685  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.090401  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:51.090720  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:51.090740  813918 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-110926 && echo "addons-110926" | sudo tee /etc/hostname
	I1002 06:36:51.236050  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110926
	
	I1002 06:36:51.236138  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.258063  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:51.258373  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:51.258395  813918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-110926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-110926/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-110926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:36:51.388860  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:36:51.388887  813918 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-811293/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-811293/.minikube}
	I1002 06:36:51.388910  813918 ubuntu.go:190] setting up certificates
	I1002 06:36:51.388920  813918 provision.go:84] configureAuth start
	I1002 06:36:51.388983  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:51.405357  813918 provision.go:143] copyHostCerts
	I1002 06:36:51.405461  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem (1123 bytes)
	I1002 06:36:51.405586  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem (1679 bytes)
	I1002 06:36:51.405650  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem (1078 bytes)
	I1002 06:36:51.405711  813918 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem org=jenkins.addons-110926 san=[127.0.0.1 192.168.49.2 addons-110926 localhost minikube]
	I1002 06:36:51.612527  813918 provision.go:177] copyRemoteCerts
	I1002 06:36:51.612597  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:36:51.612649  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.629460  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:51.725298  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:36:51.743050  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:36:51.760643  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:36:51.777747  813918 provision.go:87] duration metric: took 388.803174ms to configureAuth
	I1002 06:36:51.777772  813918 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:36:51.777954  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:36:51.777961  813918 machine.go:96] duration metric: took 3.866353513s to provisionDockerMachine
	I1002 06:36:51.777968  813918 client.go:171] duration metric: took 11.933342699s to LocalClient.Create
	I1002 06:36:51.777991  813918 start.go:167] duration metric: took 11.933425856s to libmachine.API.Create "addons-110926"
	I1002 06:36:51.778000  813918 start.go:293] postStartSetup for "addons-110926" (driver="docker")
	I1002 06:36:51.778009  813918 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:36:51.778057  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:36:51.778100  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.794568  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:51.888438  813918 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:36:51.891559  813918 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:36:51.891587  813918 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:36:51.891598  813918 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/addons for local assets ...
	I1002 06:36:51.891662  813918 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/files for local assets ...
	I1002 06:36:51.891684  813918 start.go:296] duration metric: took 113.678581ms for postStartSetup
	I1002 06:36:51.891998  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:51.908094  813918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json ...
	I1002 06:36:51.908374  813918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:36:51.908417  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.924432  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.017816  813918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:36:52.022845  813918 start.go:128] duration metric: took 12.181870526s to createHost
	I1002 06:36:52.022873  813918 start.go:83] releasing machines lock for "addons-110926", held for 12.182006857s
	I1002 06:36:52.022950  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:52.040319  813918 ssh_runner.go:195] Run: cat /version.json
	I1002 06:36:52.040381  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:52.040643  813918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:36:52.040709  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:52.064673  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.078579  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.168362  813918 ssh_runner.go:195] Run: systemctl --version
	I1002 06:36:52.263150  813918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:36:52.267928  813918 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:36:52.267998  813918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:36:52.294529  813918 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 06:36:52.294574  813918 start.go:495] detecting cgroup driver to use...
	I1002 06:36:52.294607  813918 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 06:36:52.294670  813918 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 06:36:52.309592  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 06:36:52.322252  813918 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:36:52.322343  813918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:36:52.339306  813918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:36:52.357601  813918 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:36:52.498437  813918 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:36:52.636139  813918 docker.go:234] disabling docker service ...
	I1002 06:36:52.636222  813918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:36:52.659149  813918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:36:52.672149  813918 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:36:52.790045  813918 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:36:52.904510  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:36:52.917512  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:36:52.931680  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 06:36:52.940606  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 06:36:52.949651  813918 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 06:36:52.949722  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 06:36:52.958437  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 06:36:52.967122  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 06:36:52.975524  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 06:36:52.984274  813918 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:36:52.992118  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 06:36:53.000891  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 06:36:53.011203  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 06:36:53.020137  813918 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:36:53.027434  813918 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:36:53.034538  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:36:53.146732  813918 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 06:36:53.259109  813918 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 06:36:53.259213  813918 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 06:36:53.262865  813918 start.go:563] Will wait 60s for crictl version
	I1002 06:36:53.262951  813918 ssh_runner.go:195] Run: which crictl
	I1002 06:36:53.266209  813918 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:36:53.294330  813918 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 06:36:53.294471  813918 ssh_runner.go:195] Run: containerd --version
	I1002 06:36:53.317070  813918 ssh_runner.go:195] Run: containerd --version
	I1002 06:36:53.342544  813918 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 06:36:53.345439  813918 cli_runner.go:164] Run: docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:36:53.361595  813918 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:36:53.365182  813918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:36:53.374561  813918 kubeadm.go:883] updating cluster {Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:36:53.374681  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:53.374737  813918 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:36:53.399251  813918 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 06:36:53.399274  813918 containerd.go:534] Images already preloaded, skipping extraction
	I1002 06:36:53.399339  813918 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:36:53.423479  813918 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 06:36:53.423504  813918 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:36:53.423513  813918 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1002 06:36:53.423602  813918 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-110926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:36:53.423672  813918 ssh_runner.go:195] Run: sudo crictl info
	I1002 06:36:53.448450  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:36:53.448474  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:53.448496  813918 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:36:53.448523  813918 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-110926 NodeName:addons-110926 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:36:53.448665  813918 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-110926"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:36:53.448861  813918 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:36:53.457671  813918 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:36:53.457745  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:36:53.466514  813918 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1002 06:36:53.480222  813918 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:36:53.492979  813918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1002 06:36:53.506618  813918 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:36:53.510443  813918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:36:53.519937  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:36:53.633003  813918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:36:53.653268  813918 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926 for IP: 192.168.49.2
	I1002 06:36:53.653291  813918 certs.go:195] generating shared ca certs ...
	I1002 06:36:53.653331  813918 certs.go:227] acquiring lock for ca certs: {Name:mk33b75296d4c02eee9bab3e9582ce8896a2d7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:53.654149  813918 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key
	I1002 06:36:54.554249  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt ...
	I1002 06:36:54.554277  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt: {Name:mk2139057332209b98dbb746fb9a256d2b754164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.554459  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key ...
	I1002 06:36:54.554470  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key: {Name:mkcae11ed523222e33231ecbd86e12b64a288b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.554546  813918 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key
	I1002 06:36:54.895364  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt ...
	I1002 06:36:54.895399  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt: {Name:mke2bb76dd7b81d2d26af5e116b652209f0542b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.895600  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key ...
	I1002 06:36:54.895614  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key: {Name:mkc32897a4730ab5fb973fb69d1a38ca87d85c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.896344  813918 certs.go:257] generating profile certs ...
	I1002 06:36:54.896423  813918 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key
	I1002 06:36:54.896442  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt with IP's: []
	I1002 06:36:55.419216  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt ...
	I1002 06:36:55.419259  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: {Name:mk10e15791cbf0b0edd868b4fdb8e230e5e309e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.419452  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key ...
	I1002 06:36:55.419466  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key: {Name:mk9f0a92cebc1827b3a9e95b7f53c1d4b6a59638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.419563  813918 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549
	I1002 06:36:55.419584  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 06:36:55.722878  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 ...
	I1002 06:36:55.722908  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549: {Name:mk85eea21d417032742d45805e5f307e924f0055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.723654  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549 ...
	I1002 06:36:55.723671  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549: {Name:mkf298fb25e09f690a5e28cc66f4a6b37f67e15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.724361  813918 certs.go:382] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt
	I1002 06:36:55.724446  813918 certs.go:386] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549 -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key
	I1002 06:36:55.724499  813918 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key
	I1002 06:36:55.724522  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt with IP's: []
	I1002 06:36:56.363048  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt ...
	I1002 06:36:56.363081  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt: {Name:mk4c25ab58ebf52954efb245b3c0c0d9e1c6bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:56.363911  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key ...
	I1002 06:36:56.363932  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key: {Name:mk7f28565479e9a862d5049acbcab89444bf5a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:56.364713  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:36:56.364779  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:36:56.364814  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:36:56.364842  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem (1679 bytes)
	I1002 06:36:56.365421  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:36:56.384138  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 06:36:56.402907  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:36:56.420429  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:36:56.438118  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:36:56.455787  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:36:56.473374  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:36:56.490901  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:36:56.509097  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:36:56.526744  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:36:56.539426  813918 ssh_runner.go:195] Run: openssl version
	I1002 06:36:56.545473  813918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:36:56.553848  813918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.557589  813918 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.557674  813918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.599790  813918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:36:56.608153  813918 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:36:56.611552  813918 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:36:56.611600  813918 kubeadm.go:400] StartCluster: {Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:56.611680  813918 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 06:36:56.611736  813918 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:36:56.639982  813918 cri.go:89] found id: ""
	I1002 06:36:56.640052  813918 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:36:56.647729  813918 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:36:56.655474  813918 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:36:56.655568  813918 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:36:56.663121  813918 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:36:56.663142  813918 kubeadm.go:157] found existing configuration files:
	
	I1002 06:36:56.663221  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:36:56.670874  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:36:56.670972  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:36:56.678534  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:36:56.685938  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:36:56.685996  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:36:56.692708  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:36:56.699925  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:36:56.700015  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:36:56.707153  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:36:56.714621  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:36:56.714749  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:36:56.722338  813918 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:36:56.759248  813918 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:36:56.759571  813918 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:36:56.790582  813918 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:36:56.790657  813918 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 06:36:56.790699  813918 kubeadm.go:318] OS: Linux
	I1002 06:36:56.790763  813918 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:36:56.790820  813918 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 06:36:56.790875  813918 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:36:56.790936  813918 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:36:56.790994  813918 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:36:56.791049  813918 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:36:56.791100  813918 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:36:56.791153  813918 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:36:56.791207  813918 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 06:36:56.880850  813918 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:36:56.880966  813918 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:36:56.881067  813918 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:36:56.886790  813918 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:36:56.890544  813918 out.go:252]   - Generating certificates and keys ...
	I1002 06:36:56.890681  813918 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:36:56.890776  813918 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:36:57.277686  813918 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:36:57.698690  813918 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:36:58.123771  813918 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:36:58.316428  813918 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:36:58.712844  813918 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:36:58.713106  813918 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-110926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:36:59.412304  813918 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:36:59.412590  813918 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-110926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:36:59.506243  813918 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:37:00.458571  813918 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:37:00.702742  813918 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:37:00.703124  813918 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:37:01.245158  813918 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:37:01.470802  813918 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:37:01.723353  813918 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:37:01.786251  813918 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:37:02.286866  813918 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:37:02.287602  813918 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:37:02.290493  813918 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:37:02.293946  813918 out.go:252]   - Booting up control plane ...
	I1002 06:37:02.294063  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:37:02.294988  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:37:02.295992  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:37:02.312503  813918 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:37:02.312871  813918 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:37:02.320595  813918 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:37:02.321016  813918 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:37:02.321262  813918 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:37:02.457350  813918 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:37:02.457522  813918 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:37:03.461255  813918 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00198836s
	I1002 06:37:03.463308  813918 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:37:03.463532  813918 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:37:03.463645  813918 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:37:03.464191  813918 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:37:06.566691  813918 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.102303507s
	I1002 06:37:08.316492  813918 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.851816452s
	I1002 06:37:09.465139  813918 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001507743s
	I1002 06:37:09.489317  813918 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:37:09.522458  813918 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:37:09.556453  813918 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:37:09.556687  813918 kubeadm.go:318] [mark-control-plane] Marking the node addons-110926 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:37:09.572399  813918 kubeadm.go:318] [bootstrap-token] Using token: 7g41rx.fb6mqimdeeyoknq9
	I1002 06:37:09.575450  813918 out.go:252]   - Configuring RBAC rules ...
	I1002 06:37:09.575583  813918 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:37:09.580181  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:37:09.588090  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:37:09.592801  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:37:09.600582  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:37:09.607878  813918 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:37:09.872917  813918 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:37:10.299814  813918 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:37:10.872732  813918 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:37:10.874055  813918 kubeadm.go:318] 
	I1002 06:37:10.874135  813918 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:37:10.874146  813918 kubeadm.go:318] 
	I1002 06:37:10.874227  813918 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:37:10.874248  813918 kubeadm.go:318] 
	I1002 06:37:10.874283  813918 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:37:10.874350  813918 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:37:10.874409  813918 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:37:10.874417  813918 kubeadm.go:318] 
	I1002 06:37:10.874473  813918 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:37:10.874482  813918 kubeadm.go:318] 
	I1002 06:37:10.874532  813918 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:37:10.874540  813918 kubeadm.go:318] 
	I1002 06:37:10.874595  813918 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:37:10.874679  813918 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:37:10.874756  813918 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:37:10.874764  813918 kubeadm.go:318] 
	I1002 06:37:10.874852  813918 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:37:10.874936  813918 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:37:10.874945  813918 kubeadm.go:318] 
	I1002 06:37:10.875033  813918 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7g41rx.fb6mqimdeeyoknq9 \
	I1002 06:37:10.875146  813918 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 \
	I1002 06:37:10.875172  813918 kubeadm.go:318] 	--control-plane 
	I1002 06:37:10.875181  813918 kubeadm.go:318] 
	I1002 06:37:10.875270  813918 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:37:10.875279  813918 kubeadm.go:318] 
	I1002 06:37:10.875365  813918 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7g41rx.fb6mqimdeeyoknq9 \
	I1002 06:37:10.875475  813918 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 
	I1002 06:37:10.878324  813918 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 06:37:10.878562  813918 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 06:37:10.878676  813918 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:37:10.878697  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:37:10.878705  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:37:10.881877  813918 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 06:37:10.884817  813918 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 06:37:10.889466  813918 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 06:37:10.889488  813918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 06:37:10.902465  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 06:37:11.181141  813918 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:37:11.181229  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:11.181309  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-110926 minikube.k8s.io/updated_at=2025_10_02T06_37_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-110926 minikube.k8s.io/primary=true
	I1002 06:37:11.362613  813918 ops.go:34] apiserver oom_adj: -16
	I1002 06:37:11.362717  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:11.863387  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:12.363462  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:12.863468  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:13.362840  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:13.863815  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:14.363244  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:14.495136  813918 kubeadm.go:1113] duration metric: took 3.313961954s to wait for elevateKubeSystemPrivileges
	I1002 06:37:14.495171  813918 kubeadm.go:402] duration metric: took 17.883574483s to StartCluster
	I1002 06:37:14.495189  813918 settings.go:142] acquiring lock: {Name:mkfabb257d5e6dc89516b7f3eecfb5ad470245b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:14.495908  813918 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:37:14.496318  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/kubeconfig: {Name:mk61b1a16c6c070d43ba1e4fed7f7f8861077db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:14.497144  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:37:14.497165  813918 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 06:37:14.497416  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:14.497447  813918 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:37:14.497542  813918 addons.go:69] Setting yakd=true in profile "addons-110926"
	I1002 06:37:14.497556  813918 addons.go:238] Setting addon yakd=true in "addons-110926"
	I1002 06:37:14.497579  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.497665  813918 addons.go:69] Setting inspektor-gadget=true in profile "addons-110926"
	I1002 06:37:14.497681  813918 addons.go:238] Setting addon inspektor-gadget=true in "addons-110926"
	I1002 06:37:14.497701  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.498032  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.498105  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.498760  813918 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-110926"
	I1002 06:37:14.498784  813918 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-110926"
	I1002 06:37:14.498819  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.499233  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.504834  813918 addons.go:69] Setting metrics-server=true in profile "addons-110926"
	I1002 06:37:14.504923  813918 addons.go:238] Setting addon metrics-server=true in "addons-110926"
	I1002 06:37:14.504988  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.505608  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.507518  813918 out.go:179] * Verifying Kubernetes components...
	I1002 06:37:14.507725  813918 addons.go:69] Setting cloud-spanner=true in profile "addons-110926"
	I1002 06:37:14.507753  813918 addons.go:238] Setting addon cloud-spanner=true in "addons-110926"
	I1002 06:37:14.507795  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.508276  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.519123  813918 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-110926"
	I1002 06:37:14.519204  813918 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-110926"
	I1002 06:37:14.519258  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.523209  813918 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-110926"
	I1002 06:37:14.523335  813918 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-110926"
	I1002 06:37:14.523396  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.523909  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.524419  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.536906  813918 addons.go:69] Setting registry=true in profile "addons-110926"
	I1002 06:37:14.536941  813918 addons.go:238] Setting addon registry=true in "addons-110926"
	I1002 06:37:14.536983  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.537475  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.539289  813918 addons.go:69] Setting default-storageclass=true in profile "addons-110926"
	I1002 06:37:14.558568  813918 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-110926"
	I1002 06:37:14.559019  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.559239  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:14.541208  813918 addons.go:69] Setting registry-creds=true in profile "addons-110926"
	I1002 06:37:14.561178  813918 addons.go:238] Setting addon registry-creds=true in "addons-110926"
	I1002 06:37:14.561363  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.541231  813918 addons.go:69] Setting storage-provisioner=true in profile "addons-110926"
	I1002 06:37:14.563047  813918 addons.go:238] Setting addon storage-provisioner=true in "addons-110926"
	I1002 06:37:14.563932  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.566547  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.541239  813918 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-110926"
	I1002 06:37:14.579820  813918 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-110926"
	I1002 06:37:14.580221  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.586764  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.541246  813918 addons.go:69] Setting volcano=true in profile "addons-110926"
	I1002 06:37:14.607872  813918 addons.go:238] Setting addon volcano=true in "addons-110926"
	I1002 06:37:14.541349  813918 addons.go:69] Setting volumesnapshots=true in profile "addons-110926"
	I1002 06:37:14.607929  813918 addons.go:238] Setting addon volumesnapshots=true in "addons-110926"
	I1002 06:37:14.607950  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.556898  813918 addons.go:69] Setting gcp-auth=true in profile "addons-110926"
	I1002 06:37:14.624993  813918 mustload.go:65] Loading cluster: addons-110926
	I1002 06:37:14.625253  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:14.625626  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.556924  813918 addons.go:69] Setting ingress=true in profile "addons-110926"
	I1002 06:37:14.631873  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.632366  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.556929  813918 addons.go:69] Setting ingress-dns=true in profile "addons-110926"
	I1002 06:37:14.632643  813918 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:37:14.631728  813918 addons.go:238] Setting addon ingress=true in "addons-110926"
	I1002 06:37:14.633388  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.633841  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.650708  813918 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:37:14.654882  813918 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:37:14.654909  813918 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:37:14.654981  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.659338  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.671893  813918 addons.go:238] Setting addon ingress-dns=true in "addons-110926"
	I1002 06:37:14.671956  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.672451  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.681943  813918 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:37:14.682145  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:37:14.682171  813918 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:37:14.682243  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.730779  813918 addons.go:238] Setting addon default-storageclass=true in "addons-110926"
	I1002 06:37:14.730824  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.731463  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.736081  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:37:14.743901  813918 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:37:14.748859  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:37:14.749029  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.798861  813918 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:37:14.801456  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:37:14.801501  813918 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:37:14.801637  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.840051  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:37:14.844935  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:37:14.848913  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:37:14.851733  813918 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:37:14.854520  813918 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:37:14.857638  813918 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:37:14.858717  813918 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:37:14.858738  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:37:14.858817  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.860526  813918 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:37:14.860546  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:37:14.860632  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.893874  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:37:14.894058  813918 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:37:14.897434  813918 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:37:14.897458  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:37:14.897547  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.918428  813918 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-110926"
	I1002 06:37:14.918472  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.918875  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.921121  813918 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:37:14.925950  813918 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:37:14.925974  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:37:14.926042  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.945293  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.949541  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:37:14.956438  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:37:14.957575  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:14.966829  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:37:14.967843  813918 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:37:14.983357  813918 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1002 06:37:14.991256  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:37:14.991531  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:37:14.991690  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:14.992663  813918 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:37:14.992678  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:37:14.992742  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.996512  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:14.996904  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:37:14.996921  813918 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:37:14.996989  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.005391  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:37:15.005812  813918 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:37:15.006640  813918 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:37:15.006661  813918 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:37:15.006739  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.008284  813918 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1002 06:37:15.009342  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:37:15.009438  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:37:15.009541  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.028005  813918 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1002 06:37:15.033152  813918 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 06:37:15.033183  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1002 06:37:15.033275  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.054617  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.055541  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:15.055750  813918 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:37:15.055763  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:37:15.055832  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.061085  813918 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:37:15.061106  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:37:15.061173  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.074564  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.081642  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.111200  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.136860  813918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:37:15.148801  813918 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:37:15.151741  813918 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:37:15.156261  813918 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:37:15.156284  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:37:15.156355  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.169924  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.193516  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.199715  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.214370  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.237018  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.237601  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.243930  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.244071  813918 retry.go:31] will retry after 305.561491ms: ssh: handshake failed: EOF
	I1002 06:37:15.251932  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.255879  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.259811  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.259836  813918 retry.go:31] will retry after 210.072349ms: ssh: handshake failed: EOF
	I1002 06:37:15.265683  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.272079  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.565323  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.565348  813918 retry.go:31] will retry after 243.153386ms: ssh: handshake failed: EOF
	I1002 06:37:15.846286  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:37:15.846311  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:37:15.944527  813918 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:15.944599  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:37:15.970354  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:37:15.985885  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:37:16.012665  813918 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:37:16.012693  813918 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:37:16.019458  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:37:16.019485  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:37:16.043516  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:37:16.043539  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:37:16.060218  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:37:16.072624  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:37:16.090843  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 06:37:16.096286  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:37:16.096364  813918 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:37:16.184119  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:16.205029  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:37:16.206409  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:37:16.211099  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:37:16.221140  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:37:16.281478  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:37:16.281550  813918 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:37:16.294235  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:37:16.294308  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:37:16.314044  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:37:16.314122  813918 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:37:16.314878  813918 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:37:16.314923  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:37:16.334271  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:37:16.435552  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:37:16.435625  813918 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:37:16.486137  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:37:16.508790  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:37:16.508817  813918 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:37:16.527074  813918 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.79094086s)
	I1002 06:37:16.527103  813918 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 06:37:16.527172  813918 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.390287567s)
	I1002 06:37:16.527930  813918 node_ready.go:35] waiting up to 6m0s for node "addons-110926" to be "Ready" ...
	I1002 06:37:16.692302  813918 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:16.692321  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:37:16.739744  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:37:16.739768  813918 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:37:16.803024  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:37:16.866551  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:16.918292  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:37:16.918317  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:37:16.976907  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:37:16.976934  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:37:17.032696  813918 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-110926" context rescaled to 1 replicas
	I1002 06:37:17.174089  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:37:17.174115  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:37:17.194531  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:37:17.590550  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:37:17.590575  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:37:17.985718  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:37:17.985751  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:37:18.258016  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:37:18.258042  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:37:18.426273  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:37:18.426298  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	W1002 06:37:18.558468  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:18.892311  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:37:18.892338  813918 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:37:19.094159  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:37:19.094182  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:37:19.262380  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:37:19.262404  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:37:19.445644  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:37:19.445669  813918 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:37:19.720946  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 06:37:21.041084  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:21.578538  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.608100964s)
	I1002 06:37:21.578618  813918 addons.go:479] Verifying addon ingress=true in "addons-110926"
	I1002 06:37:21.579021  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.5930618s)
	I1002 06:37:21.579193  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.518951153s)
	I1002 06:37:21.579261  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.506611096s)
	I1002 06:37:21.582085  813918 out.go:179] * Verifying ingress addon...
	I1002 06:37:21.586543  813918 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:37:21.655191  813918 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:37:21.655263  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.115015  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.583411  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:37:22.583564  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:22.610354  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:22.612089  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.737638  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:37:22.767377  813918 addons.go:238] Setting addon gcp-auth=true in "addons-110926"
	I1002 06:37:22.767434  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:22.767894  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:22.793827  813918 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:37:22.793887  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:22.830306  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:23.096079  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:23.101826  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:23.167688  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (7.0767591s)
	I1002 06:37:23.167794  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.983606029s)
	W1002 06:37:23.167817  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:23.167835  813918 retry.go:31] will retry after 146.597414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:23.167865  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.9627765s)
	I1002 06:37:23.167924  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.96145652s)
	I1002 06:37:23.167989  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.956824802s)
	I1002 06:37:23.168168  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.946960517s)
	I1002 06:37:23.168215  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.833882657s)
	I1002 06:37:23.168229  813918 addons.go:479] Verifying addon registry=true in "addons-110926"
	I1002 06:37:23.168432  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.682270471s)
	I1002 06:37:23.168504  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.365459957s)
	I1002 06:37:23.168515  813918 addons.go:479] Verifying addon metrics-server=true in "addons-110926"
	I1002 06:37:23.168593  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.302013657s)
	W1002 06:37:23.168612  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:37:23.168628  813918 retry.go:31] will retry after 145.945512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:37:23.168670  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.974112429s)
	I1002 06:37:23.171600  813918 out.go:179] * Verifying registry addon...
	I1002 06:37:23.175423  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:37:23.175675  813918 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-110926 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:37:23.215812  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.494815173s)
	I1002 06:37:23.215842  813918 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-110926"
	I1002 06:37:23.218592  813918 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:37:23.218725  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:23.222422  813918 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:37:23.223098  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:37:23.225306  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:37:23.225336  813918 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:37:23.265230  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:37:23.265257  813918 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:37:23.271284  813918 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:37:23.271303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:23.301079  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:37:23.301100  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:37:23.315262  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:23.315479  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:23.362438  813918 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:37:23.362461  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:23.371447  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:37:23.590215  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:23.690482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:23.726143  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.091791  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:24.192769  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:24.240956  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.605709  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:24.703226  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:24.726522  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.936549  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.621028233s)
	I1002 06:37:24.936718  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.621420893s)
	W1002 06:37:24.936789  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:24.936837  813918 retry.go:31] will retry after 561.608809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:24.936908  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.565434855s)
	I1002 06:37:24.939978  813918 addons.go:479] Verifying addon gcp-auth=true in "addons-110926"
	I1002 06:37:24.944986  813918 out.go:179] * Verifying gcp-auth addon...
	I1002 06:37:24.948596  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:37:24.951413  813918 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:37:24.951434  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:25.090748  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:25.178550  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:25.226439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:25.452219  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:25.499574  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:37:25.531518  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:25.589865  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:25.683530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:25.726612  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:25.951542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:26.090750  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:26.179030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:26.226732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:26.317076  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:26.317226  813918 retry.go:31] will retry after 583.727209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:26.452148  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:26.589788  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:26.683078  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:26.727068  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:26.901144  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:26.952896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:27.091613  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:27.179042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:27.226561  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:27.451348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:27.531649  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:27.591525  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:27.683031  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:27.712297  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:27.712326  813918 retry.go:31] will retry after 648.169313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:27.726104  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:27.952014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:28.090463  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:28.191332  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:28.226482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:28.360900  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:28.452621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:28.590622  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:28.684494  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:28.726619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:28.952459  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:29.090817  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:29.180514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:29.185770  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:29.185799  813918 retry.go:31] will retry after 638.486695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:29.226864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:29.451636  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:29.589804  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:29.683512  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:29.726574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:29.824932  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:29.952114  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:30.032649  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:30.090885  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:30.179094  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:30.226154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:30.452508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:30.592222  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:30.684732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:30.698805  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:30.698840  813918 retry.go:31] will retry after 1.386655025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:30.726921  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:30.951637  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:31.090673  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:31.178664  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:31.226447  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:31.452374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:31.590331  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:31.682815  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:31.726337  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:31.952229  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:32.086627  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:32.090653  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:32.179238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:32.226721  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:32.452452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:32.530986  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:32.590889  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:32.683805  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:32.727482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:32.884199  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:32.884242  813918 retry.go:31] will retry after 1.764941661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:32.952014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:33.090182  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:33.179042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:33.226874  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:33.451508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:33.590092  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:33.682977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:33.725974  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:33.951836  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:34.090782  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:34.178819  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:34.226525  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:34.452486  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:34.531295  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:34.590650  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:34.649946  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:34.686870  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:34.726748  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:34.952390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:35.093119  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:35.179530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:35.226048  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:35.451917  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:35.484501  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:35.484530  813918 retry.go:31] will retry after 6.007881753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:35.590705  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:35.683551  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:35.726503  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:35.952327  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:36.090688  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:36.191481  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:36.226150  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:36.452471  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:36.590726  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:36.683932  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:36.727072  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:36.951909  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:37.032811  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:37.090041  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:37.178985  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:37.226683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:37.451377  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:37.590155  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:37.683502  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:37.726422  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:37.951666  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:38.090533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:38.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:38.226290  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:38.452969  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:38.589891  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:38.678445  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:38.726426  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:38.951569  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:39.090363  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:39.178554  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:39.226682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:39.451688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:39.531480  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:39.589495  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:39.683560  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:39.726605  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:39.951696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:40.090353  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:40.179467  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:40.226430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:40.451667  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:40.590213  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:40.682834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:40.726735  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:40.951452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:41.090424  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:41.178251  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:41.225935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:41.451935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:41.493320  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:37:41.531920  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:41.590388  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:41.682815  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:41.727080  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:41.951832  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:42.097513  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:42.180007  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:42.228335  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:42.397373  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:42.397404  813918 retry.go:31] will retry after 6.331757331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:42.452908  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:42.590432  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:42.683443  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:42.726508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:42.952318  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:43.090165  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:43.178978  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:43.225896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:43.451987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:43.590602  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:43.678528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:43.726661  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:43.951424  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:44.031312  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:44.090520  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:44.178976  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:44.226569  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:44.451727  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:44.596784  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:44.697937  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:44.726640  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:44.951415  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:45.090703  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:45.179490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:45.227523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:45.451631  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:45.589687  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:45.683601  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:45.727673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:45.951624  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:46.031927  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:46.090068  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:46.178708  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:46.226451  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:46.451429  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:46.590533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:46.678457  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:46.726355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:46.952193  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:47.090132  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:47.179505  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:47.226590  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:47.451700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:47.590360  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:47.683040  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:47.725863  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:47.952219  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:48.090642  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:48.178440  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:48.226648  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:48.451752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:48.531666  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:48.590304  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:48.678358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:48.726321  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:48.729320  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:48.951489  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:49.091175  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:49.180116  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:49.226101  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:49.452407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:49.530266  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:49.530298  813918 retry.go:31] will retry after 12.414314859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:49.590599  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:49.683495  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:49.726800  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:49.951645  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:50.090598  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:50.178639  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:50.226627  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:50.451589  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:50.590544  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:50.682812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:50.726927  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:50.951882  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:51.030659  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:51.089892  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:51.179276  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:51.225934  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:51.451935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:51.589726  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:51.683005  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:51.725957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:51.951996  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:52.091773  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:52.178278  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:52.226119  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:52.451977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:52.590251  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:52.683413  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:52.726061  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:52.952248  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:53.031163  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:53.090127  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:53.178995  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:53.227062  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:53.452030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:53.590043  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:53.683319  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:53.726034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:53.951951  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:54.090498  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:54.178558  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:54.226461  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:54.451500  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:54.590406  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:54.683724  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:54.726962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:54.952006  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:55.031442  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:55.091214  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:55.179018  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:55.225804  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:55.451548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:55.590030  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:55.682894  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:55.726632  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:55.951851  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:56.090254  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:56.179316  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:56.225963  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:56.451980  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:56.589903  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:56.683768  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:56.726710  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:56.969890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:57.039661  813918 node_ready.go:49] node "addons-110926" is "Ready"
	I1002 06:37:57.039759  813918 node_ready.go:38] duration metric: took 40.511800003s for node "addons-110926" to be "Ready" ...
	I1002 06:37:57.039788  813918 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:57.039875  813918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:57.093303  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:57.094841  813918 api_server.go:72] duration metric: took 42.597646349s to wait for apiserver process to appear ...
	I1002 06:37:57.094869  813918 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:37:57.094891  813918 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 06:37:57.110477  813918 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 06:37:57.112002  813918 api_server.go:141] control plane version: v1.34.1
	I1002 06:37:57.112039  813918 api_server.go:131] duration metric: took 17.162356ms to wait for apiserver health ...
	I1002 06:37:57.112050  813918 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:37:57.164751  813918 system_pods.go:59] 19 kube-system pods found
	I1002 06:37:57.164836  813918 system_pods.go:61] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.164843  813918 system_pods.go:61] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.164850  813918 system_pods.go:61] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.164855  813918 system_pods.go:61] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending
	I1002 06:37:57.164860  813918 system_pods.go:61] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.164866  813918 system_pods.go:61] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.164895  813918 system_pods.go:61] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.164906  813918 system_pods.go:61] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.164911  813918 system_pods.go:61] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.164915  813918 system_pods.go:61] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.164927  813918 system_pods.go:61] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.164931  813918 system_pods.go:61] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.164936  813918 system_pods.go:61] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.164940  813918 system_pods.go:61] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.164952  813918 system_pods.go:61] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.164956  813918 system_pods.go:61] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.164969  813918 system_pods.go:61] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.164978  813918 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending
	I1002 06:37:57.164984  813918 system_pods.go:61] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.164996  813918 system_pods.go:74] duration metric: took 52.940352ms to wait for pod list to return data ...
	I1002 06:37:57.165020  813918 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:37:57.180144  813918 default_sa.go:45] found service account: "default"
	I1002 06:37:57.180178  813918 default_sa.go:55] duration metric: took 15.149731ms for default service account to be created ...
	I1002 06:37:57.180188  813918 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:37:57.222552  813918 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:37:57.222577  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:57.223365  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.223410  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.223418  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.223424  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.223428  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending
	I1002 06:37:57.223442  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.223456  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.223462  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.223474  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.223481  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.223485  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.223492  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.223496  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.223503  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.223507  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.223510  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.223514  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.223521  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.223531  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending
	I1002 06:37:57.223536  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.223550  813918 retry.go:31] will retry after 203.421597ms: missing components: kube-dns
	I1002 06:37:57.317769  813918 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:37:57.317813  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:57.437762  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.437803  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.437810  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.437815  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.437821  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:57.437826  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.437841  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.437853  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.437869  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.437874  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.437877  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.437882  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.437900  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.437905  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.437909  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.437913  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.437926  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.437937  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.437946  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.437955  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.437969  813918 retry.go:31] will retry after 264.460556ms: missing components: kube-dns
	I1002 06:37:57.457586  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:57.591211  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:57.684302  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:57.707934  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.707975  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.707990  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:57.708000  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:57.708018  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:57.708030  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.708035  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.708040  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.708051  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.708113  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:57.708129  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.708172  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.708184  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:57.708195  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:57.708207  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:57.708220  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:57.708228  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:57.708247  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.708255  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.708270  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:57.708285  813918 retry.go:31] will retry after 422.985157ms: missing components: kube-dns
	I1002 06:37:57.742917  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:57.952834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:58.091317  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:58.137271  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:58.137312  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:58.137322  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:58.137331  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:58.137338  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:58.137342  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:58.137350  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:58.137355  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:58.137359  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:58.137366  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:58.137375  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:58.137380  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:58.137386  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:58.137399  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:58.137411  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:58.137417  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:58.137426  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:58.137433  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.137444  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.137451  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:58.137467  813918 retry.go:31] will retry after 586.146569ms: missing components: kube-dns
	I1002 06:37:58.178407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:58.235878  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:58.452723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:58.614086  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:58.705574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:58.752782  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:58.752871  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Running
	I1002 06:37:58.752902  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:58.752951  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:58.752984  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:58.753015  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:58.753040  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:58.753071  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:58.753100  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:58.753128  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:58.753156  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:58.753185  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:58.753215  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:58.753246  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:58.753287  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:58.753323  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:58.753344  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:58.753369  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.753402  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.753429  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:58.753455  813918 system_pods.go:126] duration metric: took 1.573257013s to wait for k8s-apps to be running ...
	I1002 06:37:58.753478  813918 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:37:58.753557  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:37:58.756092  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:58.811373  813918 system_svc.go:56] duration metric: took 57.886892ms WaitForService to wait for kubelet
	I1002 06:37:58.811449  813918 kubeadm.go:586] duration metric: took 44.314256903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:58.811493  813918 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:37:58.822249  813918 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 06:37:58.822353  813918 node_conditions.go:123] node cpu capacity is 2
	I1002 06:37:58.822383  813918 node_conditions.go:105] duration metric: took 10.860686ms to run NodePressure ...
	I1002 06:37:58.822420  813918 start.go:241] waiting for startup goroutines ...
	I1002 06:37:58.952958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:59.090849  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:59.194378  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:59.293675  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:59.453551  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:59.590199  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:59.683743  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:59.727149  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:59.952566  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:00.095335  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:00.179662  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:00.233910  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:00.456053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:00.590708  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:00.683163  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:00.726621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:00.952293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:01.091005  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:01.179669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:01.229085  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:01.453177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:01.591279  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:01.686492  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:01.728097  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:01.945617  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:01.952810  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:02.090686  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:02.179657  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:02.228561  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:02.452023  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:02.591508  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:02.683154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:02.726517  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:38:02.824299  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:02.824331  813918 retry.go:31] will retry after 15.691806375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:02.952380  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:03.090609  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:03.178940  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:03.227145  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:03.453458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:03.590296  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:03.683856  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:03.728071  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:03.952283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:04.091664  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:04.192092  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:04.226458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:04.451525  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:04.589908  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:04.683265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:04.730121  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:04.952803  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:05.091341  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:05.179246  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:05.227241  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:05.453166  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:05.590701  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:05.678855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:05.729441  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:05.955761  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:06.089976  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:06.179542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:06.229669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:06.451663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:06.590195  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:06.684205  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:06.784414  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:06.952931  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:07.090633  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:07.179271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:07.226645  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:07.452374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:07.590940  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:07.683125  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:07.726314  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:07.958423  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:08.089866  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:08.178562  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:08.226685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:08.452416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:08.589770  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:08.683752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:08.726663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:08.952521  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:09.090474  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:09.179170  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:09.227253  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:09.453357  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:09.593377  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:09.684130  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:09.728107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:09.951741  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:10.090984  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:10.181589  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:10.227685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:10.451548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:10.590276  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:10.684315  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:10.726459  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:10.951730  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:11.094349  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:11.181744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:11.226987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:11.452812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:11.589905  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:11.684532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:11.727310  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:11.952952  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:12.090716  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:12.178859  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:12.227650  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:12.452172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:12.590288  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:12.684016  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:12.727454  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:12.952912  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:13.089873  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:13.179357  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:13.226476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:13.452233  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:13.590829  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:13.683018  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:13.727319  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:13.952542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:14.091679  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:14.180387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:14.229029  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:14.453283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:14.593239  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:14.684343  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:14.727726  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:14.951591  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:15.090426  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:15.178861  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:15.227557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:15.452049  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:15.591161  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:15.683892  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:15.726700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:15.951767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:16.090224  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:16.179552  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:16.230312  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:16.452584  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:16.590173  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:16.682977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:16.728540  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:16.952802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:17.089859  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:17.178855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:17.227103  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:17.452592  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:17.589995  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:17.683737  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:17.727124  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:17.952069  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:18.090149  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:18.178860  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:18.227063  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:18.452179  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:18.516517  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:18.591793  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:18.683303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:18.726902  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:18.951881  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:19.090407  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:19.179390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:19.280453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:19.453053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:38:19.506255  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:19.506287  813918 retry.go:31] will retry after 24.46264979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:19.591253  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:19.683612  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:19.727161  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:19.951604  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:20.090820  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:20.179282  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:20.226653  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:20.451718  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:20.590946  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:20.683133  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:20.726532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:20.952036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:21.090532  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:21.179243  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:21.227567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:21.452954  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:21.590813  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:21.683988  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:21.726704  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:21.955708  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:22.090204  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:22.179312  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:22.226758  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:22.451702  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:22.590436  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:22.683396  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:22.726810  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:22.952518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:23.090640  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:23.178389  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:23.226432  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:23.452557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:23.589536  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:23.683265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:23.726387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:23.951660  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:24.089946  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:24.179032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:24.231204  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:24.452096  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:24.591481  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:24.684150  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:24.727560  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:24.951946  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:25.090564  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:25.180720  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:25.227767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:25.452182  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:25.590552  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:25.683982  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:25.727145  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:25.952505  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:26.096097  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:26.199167  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:26.227457  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:26.451429  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:26.589950  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:26.682877  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:26.728464  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:26.952825  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:27.090029  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:27.178693  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:27.227164  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:27.451877  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:27.590889  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:27.694494  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:27.726681  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:27.953022  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:28.090718  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:28.178712  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:28.226849  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:28.451699  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:28.590634  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:28.680358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:28.727806  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:28.952386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:29.090865  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:29.192262  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:29.296040  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:29.458956  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:29.592945  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:29.696528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:29.727745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:29.960224  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:30.108669  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:30.181176  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:30.229077  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:30.453626  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:30.590233  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:30.688386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:30.727482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:30.962237  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:31.091531  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:31.180490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:31.229509  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:31.452749  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:31.591491  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:31.683355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:31.726970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:31.952445  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:32.091436  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:32.190896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:32.228381  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:32.452736  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:32.590064  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:32.684030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:32.726390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:32.951770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:33.090909  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:33.178957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:33.228094  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:33.452528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:33.590375  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:33.684236  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:33.727041  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:33.952649  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:34.090690  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:34.178430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:34.227390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:34.452820  813918 kapi.go:107] duration metric: took 1m9.5042235s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 06:38:34.456518  813918 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-110926 cluster.
	I1002 06:38:34.459299  813918 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 06:38:34.462514  813918 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 06:38:34.590456  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:34.683783  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:34.726876  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:35.091815  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:35.192181  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:35.225996  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:35.590532  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:35.683177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:35.727077  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:36.090514  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:36.178631  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:36.226657  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:36.590586  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:36.684420  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:36.726745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:37.090769  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:37.193241  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:37.227067  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:37.591255  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:37.682734  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:37.727297  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:38.089746  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:38.178757  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:38.227287  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:38.591547  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:38.691271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:38.727108  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:39.106229  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:39.202273  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:39.228516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:39.589988  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:39.679442  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:39.726895  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:40.094511  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:40.179452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:40.237240  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:40.601942  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:40.693742  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:40.738619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:41.091045  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:41.191515  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:41.226632  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:41.591721  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:41.683452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:41.726863  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:42.091861  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:42.204238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:42.227557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:42.590297  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:42.683271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:42.727579  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.091018  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:43.179103  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:43.226868  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.591731  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:43.684032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:43.726500  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.969756  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:44.090261  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:44.179366  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:44.228188  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:44.592341  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:44.686940  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:44.727784  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:45.092283  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:45.178091  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.20829608s)
	W1002 06:38:45.178208  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:45.178250  813918 retry.go:31] will retry after 22.26617142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:45.179543  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:45.236432  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:45.590441  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:45.679320  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:45.727621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:46.090405  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:46.178426  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:46.226663  813918 kapi.go:107] duration metric: took 1m23.00356106s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 06:38:46.589619  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:46.683261  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:47.089734  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:47.179374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:47.592660  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:47.683768  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:48.090007  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:48.178644  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:48.591375  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:48.683509  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:49.089829  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:49.178961  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:49.591248  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:49.691276  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:50.089984  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:50.179171  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:50.590696  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:50.683346  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:51.089635  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:51.178745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:51.590723  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:51.683306  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:52.090482  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:52.190696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:52.590622  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:52.678787  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:53.090135  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:53.179421  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:53.590204  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:53.684303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:54.089742  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:54.178289  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:54.591054  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:54.692841  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:55.091556  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:55.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:55.590831  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:55.682962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:56.090533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:56.178890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:56.590836  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:56.683198  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:57.090570  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:57.179513  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:57.590364  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:57.683132  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:58.089540  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:58.179053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:58.590839  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:58.683962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:59.090850  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:59.190988  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:59.590732  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:59.685032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:00.114597  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:00.198802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:00.590774  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:00.683043  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:01.090771  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:01.178723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:01.590300  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:01.684480  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:02.091506  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:02.180050  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:02.591681  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:02.686987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:03.092104  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:03.180518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:03.590550  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:03.684084  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:04.091333  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:04.178516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:04.590364  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:04.685968  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:05.091208  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:05.179114  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:05.593116  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:05.693180  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:06.099807  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:06.192434  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:06.591063  813918 kapi.go:107] duration metric: took 1m45.004516868s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 06:39:06.691162  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:07.178929  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:07.445436  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:39:07.683258  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:08.179496  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:39:08.321958  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:39:08.322050  813918 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
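Editor's note on the failure logged above: every `kubectl apply` retry against `ig-crd.yaml` fails the same way, because kubectl requires `apiVersion` and `kind` at the top level of every manifest document and this file apparently ships without them. A minimal sketch of the kind of pre-flight check that would surface this before shelling out to kubectl (the manifest snippets below are hypothetical stand-ins for the real ig-crd.yaml, which is not shown in the log):

```python
def missing_manifest_fields(manifest: str) -> list[str]:
    """Report required top-level Kubernetes manifest keys that are absent.

    kubectl's client-side validation rejects any document missing
    apiVersion or kind, which matches the error in the log above.
    Only top-level keys count, so indented lines are skipped.
    """
    required = ("apiVersion", "kind")
    present = {
        line.split(":", 1)[0].strip()
        for line in manifest.splitlines()
        if ":" in line and not line.startswith((" ", "\t", "#"))
    }
    return [field for field in required if field not in present]


# Hypothetical broken manifest, shaped like the failing ig-crd.yaml:
# metadata only, no type information.
broken = """\
metadata:
  name: traces.gadget.kinvolk.io
"""

# A well-formed CRD header carries both required fields.
valid = """\
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: traces.gadget.kinvolk.io
"""

print(missing_manifest_fields(broken))  # ['apiVersion', 'kind']
print(missing_manifest_fields(valid))   # []
```

The `--validate=false` workaround suggested in the stderr would let the apply proceed, but it only masks the defect: the API server would still receive an untyped document, so fixing the manifest header is the real remedy.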
	I1002 06:39:08.683452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:09.179353  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:09.686227  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:10.179510  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:10.683355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:11.179458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:11.679580  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:12.179918  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod" kapi.go:96 message repeated at ~0.5 s intervals from 06:39:12 to 06:41:28; the registry pod remained Pending throughout ...]
	I1002 06:41:28.178491  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:28.684622  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:29.179430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:29.679029  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:30.179857  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:30.684822  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:31.178471  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:31.682266  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:32.178454  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:32.683741  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:33.179093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:33.684238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:34.179255  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:34.685850  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:35.179285  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:35.684332  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:36.178487  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:36.679840  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:37.178710  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:37.684329  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:38.179191  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:38.685465  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:39.179295  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:39.684802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:40.179488  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:40.683626  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:41.179090  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:41.683827  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:42.211958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:42.683203  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:43.178389  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:43.683767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:44.179683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:44.684688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:45.179790  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:45.684540  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:46.179257  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:46.684514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:47.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:47.683477  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:48.178765  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:48.684151  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:49.179311  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:49.684698  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:50.179522  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:50.684199  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:51.178816  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:51.683369  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:52.178888  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:52.683785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:53.179801  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:53.684918  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:54.179419  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:54.686564  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:55.179115  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:55.679606  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:56.179733  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:56.684036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:57.178170  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:57.679142  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:58.178673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:58.679408  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:59.179192  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:59.685245  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:00.184879  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:00.679309  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:01.178793  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:01.683892  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:02.180107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:02.685443  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:03.178916  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:03.682980  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:04.178340  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:04.685958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:05.178346  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:05.678858  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:06.179520  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:06.685162  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:07.178663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:07.683927  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:08.178987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:08.683518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:09.179084  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:09.685719  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:10.178949  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:10.683567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:11.179144  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:11.678751  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:12.178975  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:12.685293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:13.178566  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:13.682732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:14.179093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:14.686648  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:15.178770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:15.682752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:16.179886  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:16.683072  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:17.178408  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:17.683343  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:18.179005  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:18.679908  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:19.178619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:19.685331  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:20.179236  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:20.683822  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:21.179233  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:21.684864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:22.179244  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:22.684351  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:23.180700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:23.683915  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:24.179907  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:24.683172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:25.178856  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:25.683739  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:26.179113  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:26.684228  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:27.178497  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:27.680321  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:28.178685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:28.684377  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:29.178668  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:29.683298  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:30.178673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:30.679836  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:31.179289  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:31.683809  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:32.179308  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:32.685527  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:33.179502  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:33.682722  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:34.179247  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:34.691933  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:35.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:35.684101  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:36.178537  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:36.679390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:37.178793  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:37.679292  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:38.178807  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:38.679635  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:39.179574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:39.685788  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:40.179536  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:40.679723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:41.178926  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:41.683342  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:42.205259  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:42.678979  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:43.178844  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:43.684358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:44.178792  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:44.680055  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:45.183250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:45.685665  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:46.179382  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:46.683567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:47.179323  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:47.683979  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:48.179642  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:48.678672  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:49.179393  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:49.688221  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:50.178875  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:50.683313  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:51.178669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:51.679683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:52.179098  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:52.681721  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:53.181436  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:53.683878  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:54.179394  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:54.682260  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:55.179274  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:55.679117  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:56.178213  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:56.684682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:57.179951  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:57.679759  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:58.179473  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:58.683157  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:59.178763  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:59.679298  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:00.179659  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:00.683416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:01.179914  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:01.684427  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:02.178932  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:02.684548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:03.179404  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:03.683536  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:04.179167  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:04.685131  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:05.178507  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:05.683442  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:06.178516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:06.679774  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:07.179201  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:07.679574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:08.179120  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:08.683089  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:09.178834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:09.684250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:10.178466  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:10.684419  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:11.179951  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:11.680107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:12.178342  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:12.683530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:13.179349  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:13.685184  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:22.683633  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:23.176387  813918 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1002 06:43:23.176421  813918 kapi.go:107] duration metric: took 6m0.001003242s to wait for kubernetes.io/minikube-addons=registry ...
	W1002 06:43:23.176505  813918 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1002 06:43:23.179649  813918 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, default-storageclass, volcano, nvidia-device-plugin, storage-provisioner, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, gcp-auth, csi-hostpath-driver, ingress
	I1002 06:43:23.182525  813918 addons.go:514] duration metric: took 6m8.685068561s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner default-storageclass volcano nvidia-device-plugin storage-provisioner registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots gcp-auth csi-hostpath-driver ingress]
	I1002 06:43:23.182578  813918 start.go:246] waiting for cluster config update ...
	I1002 06:43:23.182605  813918 start.go:255] writing updated cluster config ...
	I1002 06:43:23.182910  813918 ssh_runner.go:195] Run: rm -f paused
	I1002 06:43:23.186967  813918 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:43:23.191359  813918 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s68lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.195909  813918 pod_ready.go:94] pod "coredns-66bc5c9577-s68lt" is "Ready"
	I1002 06:43:23.195939  813918 pod_ready.go:86] duration metric: took 4.553514ms for pod "coredns-66bc5c9577-s68lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.198221  813918 pod_ready.go:83] waiting for pod "etcd-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.202513  813918 pod_ready.go:94] pod "etcd-addons-110926" is "Ready"
	I1002 06:43:23.202537  813918 pod_ready.go:86] duration metric: took 4.291712ms for pod "etcd-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.204756  813918 pod_ready.go:83] waiting for pod "kube-apiserver-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.208864  813918 pod_ready.go:94] pod "kube-apiserver-addons-110926" is "Ready"
	I1002 06:43:23.208890  813918 pod_ready.go:86] duration metric: took 4.040561ms for pod "kube-apiserver-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.211197  813918 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.591502  813918 pod_ready.go:94] pod "kube-controller-manager-addons-110926" is "Ready"
	I1002 06:43:23.591528  813918 pod_ready.go:86] duration metric: took 380.304031ms for pod "kube-controller-manager-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.792134  813918 pod_ready.go:83] waiting for pod "kube-proxy-4zvzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.192193  813918 pod_ready.go:94] pod "kube-proxy-4zvzf" is "Ready"
	I1002 06:43:24.192225  813918 pod_ready.go:86] duration metric: took 400.063711ms for pod "kube-proxy-4zvzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.391575  813918 pod_ready.go:83] waiting for pod "kube-scheduler-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.791416  813918 pod_ready.go:94] pod "kube-scheduler-addons-110926" is "Ready"
	I1002 06:43:24.791440  813918 pod_ready.go:86] duration metric: took 399.838153ms for pod "kube-scheduler-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.791453  813918 pod_ready.go:40] duration metric: took 1.604452407s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:43:24.848923  813918 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 06:43:24.852286  813918 out.go:179] * Done! kubectl is now configured to use "addons-110926" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	7ea6ef3ed6469       34941828a2b36       24 seconds ago       Running             volcano-scheduler                        4                   c42f5288930af       volcano-scheduler-76c996c8bf-jt89z         volcano-system
	8db7a8fd91b3a       bc6bf68f85c70       About a minute ago   Running             registry                                 0                   2574946f7674b       registry-66898fdd98-926mp                  kube-system
	84db3aa9fdfd8       34941828a2b36       2 minutes ago        Exited              volcano-scheduler                        3                   c42f5288930af       volcano-scheduler-76c996c8bf-jt89z         volcano-system
	269a2ed3c2fd5       ee2d2acdca412       11 minutes ago       Running             volcano-controllers                      0                   00ff011dd89f9       volcano-controllers-6fd4f85cb8-gszhm       volcano-system
	5f7d9891cc455       5ed383cb88c34       16 minutes ago       Running             controller                               0                   6c717320771c8       ingress-nginx-controller-9cc49f96f-srz99   ingress-nginx
	0308d38377e11       ee6d597e62dc8       16 minutes ago       Running             csi-snapshotter                          0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	25fa4fdbd3104       642ded511e141       16 minutes ago       Running             csi-provisioner                          0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	fc252b8568f42       922312104da8a       16 minutes ago       Running             liveness-probe                           0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	335d72204c3f1       08f6b2990811a       16 minutes ago       Running             hostpath                                 0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	5f68f17265ee3       deda3ad36c19b       16 minutes ago       Running             gadget                                   0                   4c1a07ae3ab5b       gadget-5sxf6                               gadget
	0e5a160912072       0107d56dbc0be       16 minutes ago       Running             node-driver-registrar                    0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	f3674d941cc54       9c8d328e7d9e8       16 minutes ago       Running             gcp-auth                                 0                   c0f1e36443f43       gcp-auth-78565c9fb4-7q2v6                  gcp-auth
	739d12f7cb55c       c67c707f59d87       16 minutes ago       Exited              patch                                    0                   bb748c608a5b6       ingress-nginx-admission-patch-bq878        ingress-nginx
	591026b1dba39       c67c707f59d87       16 minutes ago       Exited              create                                   0                   cb5a57455ef86       ingress-nginx-admission-create-lw8gl       ingress-nginx
	54cf7611bdf67       bc6c1e09a843d       16 minutes ago       Running             metrics-server                           0                   7dd5efc48ed3c       metrics-server-85b7d694d7-fg8z6            kube-system
	f6cb9c538a386       4d1e5c3e97420       16 minutes ago       Running             volume-snapshot-controller               0                   67891e8bc00da       snapshot-controller-7d9fbc56b8-xwmkw       kube-system
	0c9bf13466bdb       9a80d518f102c       16 minutes ago       Running             csi-attacher                             0                   292886057d9ec       csi-hostpath-attacher-0                    kube-system
	99c1411ad7ad7       7ce2150c8929b       16 minutes ago       Running             local-path-provisioner                   0                   b598bb55c93b6       local-path-provisioner-648f6765c9-xvgcs    local-path-storage
	5ad865f2d99af       7b85e0fbfd435       16 minutes ago       Running             registry-proxy                           0                   f2c6d58f83a8d       registry-proxy-bqxnl                       kube-system
	2b58aa20e457e       4d1e5c3e97420       16 minutes ago       Running             volume-snapshot-controller               0                   2f3e4307f0508       snapshot-controller-7d9fbc56b8-69zvz       kube-system
	e6021edb430f3       ccf6033de1d3c       16 minutes ago       Running             cloud-spanner-emulator                   0                   28bd5150ab50b       cloud-spanner-emulator-85f6b7fc65-zwxnx    default
	d702743f5b9ee       2beb1e66d58ad       16 minutes ago       Running             nvidia-device-plugin-ctr                 0                   fd8594aeaa0a0       nvidia-device-plugin-daemonset-pptng       kube-system
	01c56e6095ea5       487fa743e1e22       17 minutes ago       Running             csi-resizer                              0                   507c852501681       csi-hostpath-resizer-0                     kube-system
	9ba807329b10c       1461903ec4fe9       17 minutes ago       Running             csi-external-health-monitor-controller   0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	8e92aa4c64abd       77bdba588b953       17 minutes ago       Running             yakd                                     0                   b01eaba46b9ca       yakd-dashboard-5ff678cb9-22kn4             yakd-dashboard
	4829c9264d5b3       ba04bb24b9575       17 minutes ago       Running             storage-provisioner                      0                   cd62db6aa4ca0       storage-provisioner                        kube-system
	d607380a0ea95       138784d87c9c5       17 minutes ago       Running             coredns                                  0                   97bcb21e01196       coredns-66bc5c9577-s68lt                   kube-system
	001c4797204fc       b1a8c6f707935       17 minutes ago       Running             kindnet-cni                              0                   a8dbd581dae29       kindnet-zb4h8                              kube-system
	205ba78bdcdf4       05baa95f5142d       17 minutes ago       Running             kube-proxy                               0                   1c95f15f187e7       kube-proxy-4zvzf                           kube-system
	7d5d1641aee07       43911e833d64d       18 minutes ago       Running             kube-apiserver                           0                   111e5d5f57119       kube-apiserver-addons-110926               kube-system
	b56ea6dbe0e21       b5f57ec6b9867       18 minutes ago       Running             kube-scheduler                           0                   740338c713381       kube-scheduler-addons-110926               kube-system
	dd74ed9d21ed1       7eb2c6ff0c5a7       18 minutes ago       Running             kube-controller-manager                  0                   408527a4c051e       kube-controller-manager-addons-110926      kube-system
	8be3089b4391b       a1894772a478e       18 minutes ago       Running             etcd                                     0                   f832da367e6b5       etcd-addons-110926                         kube-system
	
	
	==> containerd <==
	Oct 02 06:54:08 addons-110926 containerd[753]: time="2025-10-02T06:54:08.645450333Z" level=info msg="PullImage \"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\""
	Oct 02 06:54:08 addons-110926 containerd[753]: time="2025-10-02T06:54:08.647528211Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:54:08 addons-110926 containerd[753]: time="2025-10-02T06:54:08.649470340Z" level=info msg="CreateContainer within sandbox \"2574946f7674bbe222d7a819a24bf40e2fc32270c646297a3e927c0f3dd56e74\" for container &ContainerMetadata{Name:registry,Attempt:0,}"
	Oct 02 06:54:08 addons-110926 containerd[753]: time="2025-10-02T06:54:08.673336232Z" level=info msg="CreateContainer within sandbox \"2574946f7674bbe222d7a819a24bf40e2fc32270c646297a3e927c0f3dd56e74\" for &ContainerMetadata{Name:registry,Attempt:0,} returns container id \"8db7a8fd91b3a069d4360e0bd06133fd08bff555163c3fb5830cdeeebf348b44\""
	Oct 02 06:54:08 addons-110926 containerd[753]: time="2025-10-02T06:54:08.674929727Z" level=info msg="StartContainer for \"8db7a8fd91b3a069d4360e0bd06133fd08bff555163c3fb5830cdeeebf348b44\""
	Oct 02 06:54:08 addons-110926 containerd[753]: time="2025-10-02T06:54:08.754523553Z" level=info msg="StartContainer for \"8db7a8fd91b3a069d4360e0bd06133fd08bff555163c3fb5830cdeeebf348b44\" returns successfully"
	Oct 02 06:54:08 addons-110926 containerd[753]: time="2025-10-02T06:54:08.782279913Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:54:09 addons-110926 containerd[753]: time="2025-10-02T06:54:09.059901508Z" level=error msg="PullImage \"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\" failed" error="failed to pull and unpack image \"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:54:09 addons-110926 containerd[753]: time="2025-10-02T06:54:09.060009459Z" level=info msg="stop pulling image docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: active requests=0, bytes read=11047"
	Oct 02 06:54:30 addons-110926 containerd[753]: time="2025-10-02T06:54:30.247117894Z" level=info msg="PullImage \"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\""
	Oct 02 06:54:30 addons-110926 containerd[753]: time="2025-10-02T06:54:30.249610710Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:54:30 addons-110926 containerd[753]: time="2025-10-02T06:54:30.385224823Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:54:30 addons-110926 containerd[753]: time="2025-10-02T06:54:30.660522557Z" level=error msg="PullImage \"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\" failed" error="failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:54:30 addons-110926 containerd[753]: time="2025-10-02T06:54:30.660578243Z" level=info msg="stop pulling image docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: active requests=0, bytes read=11047"
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.246584015Z" level=info msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\""
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.248998868Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.371110599Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.396405873Z" level=info msg="ImageUpdate event name:\"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.398295491Z" level=info msg="stop pulling image docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34: active requests=0, bytes read=5431"
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.400092008Z" level=info msg="Pulled image \"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\" with image id \"sha256:34941828a2b362c9455a3c1c1fc85208c1ff8c984ebb01674cc6a92d8aa787ef\", repo tag \"\", repo digest \"docker.io/volcanosh/vc-scheduler@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\", size \"44618004\" in 153.456106ms"
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.400132860Z" level=info msg="PullImage \"docker.io/volcanosh/vc-scheduler:v1.13.0@sha256:b05b30b3c25eff5af77e1859f47fc6acfc3520d62dc2838f0623aa4309c40b34\" returns image reference \"sha256:34941828a2b362c9455a3c1c1fc85208c1ff8c984ebb01674cc6a92d8aa787ef\""
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.402417452Z" level=info msg="CreateContainer within sandbox \"c42f5288930af9e3b3395801fe051cede54286dbe8a56c72107dd95bf1b8d26c\" for container &ContainerMetadata{Name:volcano-scheduler,Attempt:4,}"
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.423702613Z" level=info msg="CreateContainer within sandbox \"c42f5288930af9e3b3395801fe051cede54286dbe8a56c72107dd95bf1b8d26c\" for &ContainerMetadata{Name:volcano-scheduler,Attempt:4,} returns container id \"7ea6ef3ed646900ac79fab88f79761fab37ee027603c9c8f329deaae26d14a29\""
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.424459941Z" level=info msg="StartContainer for \"7ea6ef3ed646900ac79fab88f79761fab37ee027603c9c8f329deaae26d14a29\""
	Oct 02 06:54:46 addons-110926 containerd[753]: time="2025-10-02T06:54:46.506147362Z" level=info msg="StartContainer for \"7ea6ef3ed646900ac79fab88f79761fab37ee027603c9c8f329deaae26d14a29\" returns successfully"
	
	
	==> coredns [d607380a0ea95122f5da6e25cf2168aa3ea1ff11f2efdf89f4a8c2d0e5150d23] <==
	[INFO] 10.244.0.10:50787 - 5465 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000149649s
	[INFO] 10.244.0.10:50787 - 46628 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001650167s
	[INFO] 10.244.0.10:50787 - 30802 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001101513s
	[INFO] 10.244.0.10:50787 - 65077 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000141051s
	[INFO] 10.244.0.10:50787 - 27995 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000170818s
	[INFO] 10.244.0.10:57105 - 40777 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000193627s
	[INFO] 10.244.0.10:57105 - 44741 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000111316s
	[INFO] 10.244.0.10:57105 - 25201 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000093429s
	[INFO] 10.244.0.10:57105 - 38571 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084034s
	[INFO] 10.244.0.10:57105 - 24208 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076166s
	[INFO] 10.244.0.10:57105 - 56789 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000139811s
	[INFO] 10.244.0.10:57105 - 46307 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001361429s
	[INFO] 10.244.0.10:57105 - 10819 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.000882336s
	[INFO] 10.244.0.10:57105 - 62476 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000092289s
	[INFO] 10.244.0.10:57105 - 29096 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.0000767s
	[INFO] 10.244.0.10:43890 - 1641 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123016s
	[INFO] 10.244.0.10:43890 - 1411 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136259s
	[INFO] 10.244.0.10:42249 - 55738 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014663s
	[INFO] 10.244.0.10:42249 - 56025 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119479s
	[INFO] 10.244.0.10:58600 - 45308 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118355s
	[INFO] 10.244.0.10:58600 - 45497 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012044s
	[INFO] 10.244.0.10:58816 - 38609 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0013196s
	[INFO] 10.244.0.10:58816 - 38806 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001169622s
	[INFO] 10.244.0.10:53569 - 36791 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135397s
	[INFO] 10.244.0.10:53569 - 36387 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116156s
	
	
	==> describe nodes <==
	Name:               addons-110926
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-110926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-110926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_37_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-110926
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-110926"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:37:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-110926
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 06:55:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 06:54:11 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 06:54:11 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 06:54:11 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 06:54:11 +0000   Thu, 02 Oct 2025 06:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-110926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 852f460d42254382a140bbeecb584248
	  System UUID:                c6ea63c0-97bd-4894-b738-fecc8ba127ac
	  Boot ID:                    7d897d56-c217-4cfc-926c-91f9be002777
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-85f6b7fc65-zwxnx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  gadget                      gadget-5sxf6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  gcp-auth                    gcp-auth-78565c9fb4-7q2v6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-srz99    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         17m
	  kube-system                 coredns-66bc5c9577-s68lt                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-mg6q4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-110926                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         18m
	  kube-system                 kindnet-zb4h8                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-addons-110926                250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-110926       200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-4zvzf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-110926                100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-85b7d694d7-fg8z6             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         17m
	  kube-system                 nvidia-device-plugin-daemonset-pptng        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-66898fdd98-926mp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-creds-764b6fb674-s7sx5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 registry-proxy-bqxnl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-7d9fbc56b8-69zvz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-7d9fbc56b8-xwmkw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          local-path-provisioner-648f6765c9-xvgcs     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  volcano-system              volcano-admission-6c447bd768-vbfkj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  volcano-system              volcano-admission-init-4gc9b                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  volcano-system              volcano-controllers-6fd4f85cb8-gszhm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  volcano-system              volcano-scheduler-76c996c8bf-jt89z          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  yakd-dashboard              yakd-dashboard-5ff678cb9-22kn4              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-110926 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-110926 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-110926 status is now: NodeHasSufficientMemory
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  18m                kubelet          Node addons-110926 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m                kubelet          Node addons-110926 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m                kubelet          Node addons-110926 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                node-controller  Node addons-110926 event: Registered Node addons-110926 in Controller
	  Normal   NodeReady                17m                kubelet          Node addons-110926 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 05:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 2 06:35] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8be3089b4391b68797b9ff88ff2b0c3043e3281ca30bcb48a82169b26fb4081d] <==
	{"level":"warn","ts":"2025-10-02T06:37:06.446704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:06.457145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:06.572852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:24.126822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:24.150001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.461328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.478170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.495681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.527887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.548456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.563248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.624874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.689649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.719544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.736879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.755892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.770836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.790478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:39:30.364216Z","caller":"traceutil/trace.go:172","msg":"trace[134372730] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"123.100543ms","start":"2025-10-02T06:39:30.241102Z","end":"2025-10-02T06:39:30.364202Z","steps":["trace[134372730] 'process raft request'  (duration: 122.981302ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T06:47:04.880918Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1925}
	{"level":"info","ts":"2025-10-02T06:47:04.920117Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1925,"took":"38.610713ms","hash":2612370864,"current-db-size-bytes":8695808,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5120000,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2025-10-02T06:47:04.920180Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2612370864,"revision":1925,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T06:52:04.887964Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2405}
	{"level":"info","ts":"2025-10-02T06:52:04.907361Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2405,"took":"18.449885ms","hash":1927945438,"current-db-size-bytes":8695808,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":3727360,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-10-02T06:52:04.907428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1927945438,"revision":2405,"compact-revision":1925}
	
	
	==> gcp-auth [f3674d941cc54226d882b3c2ad8cb873099df7e0a7a52c98ccfe51985b3175b0] <==
	2025/10/02 06:38:32 GCP Auth Webhook started!
	
	
	==> kernel <==
	 06:55:11 up  6:37,  0 user,  load average: 0.80, 0.55, 1.39
	Linux addons-110926 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [001c4797204fc8489af667e5dc44dc2de85bde6fbbb94189af8eaa6e51b826b8] <==
	I1002 06:53:06.730850       1 main.go:301] handling current node
	I1002 06:53:16.722418       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:53:16.722450       1 main.go:301] handling current node
	I1002 06:53:26.729749       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:53:26.729785       1 main.go:301] handling current node
	I1002 06:53:36.729016       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:53:36.729060       1 main.go:301] handling current node
	I1002 06:53:46.725459       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:53:46.725496       1 main.go:301] handling current node
	I1002 06:53:56.723435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:53:56.723470       1 main.go:301] handling current node
	I1002 06:54:06.726172       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:54:06.726211       1 main.go:301] handling current node
	I1002 06:54:16.722408       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:54:16.722440       1 main.go:301] handling current node
	I1002 06:54:26.727803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:54:26.728025       1 main.go:301] handling current node
	I1002 06:54:36.724883       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:54:36.724927       1 main.go:301] handling current node
	I1002 06:54:46.725502       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:54:46.725551       1 main.go:301] handling current node
	I1002 06:54:56.726152       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:54:56.726191       1 main.go:301] handling current node
	I1002 06:55:06.728616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:55:06.728655       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7d5d1641aee0712674398096e96919d3b125a32fedea7425f03406a609a25f01] <==
	W1002 06:53:54.678720       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:46.545640       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:47.551217       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:48.642653       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:49.720177       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:50.803538       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:51.829300       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:52.851782       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:53.901589       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:54.962328       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:55.980064       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:57.037608       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:58.096911       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:54:59.188244       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:00.267931       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:01.274856       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:02.318315       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:03.359744       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:04.363365       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:05.366577       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:06.388018       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:07.448000       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:08.504867       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:09.547318       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	W1002 06:55:10.587124       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.162.32:443: connect: connection refused
	
	
	==> kube-controller-manager [dd74ed9d21ed14fc6778ffc7add04a70910ec955742f31d4442b2c07c8ea86db] <==
	I1002 06:37:14.471086       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 06:37:14.471264       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 06:37:14.471376       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 06:37:14.466793       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 06:37:14.471939       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 06:37:14.474885       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:37:14.480491       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 06:37:14.492818       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	E1002 06:37:20.359280       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1002 06:37:44.437062       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:37:44.437310       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch.volcano.sh"
	I1002 06:37:44.437375       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podgroups.scheduling.volcano.sh"
	I1002 06:37:44.437428       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1002 06:37:44.437497       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch.volcano.sh"
	I1002 06:37:44.437530       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobtemplates.flow.volcano.sh"
	I1002 06:37:44.437626       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobflows.flow.volcano.sh"
	I1002 06:37:44.437726       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="commands.bus.volcano.sh"
	I1002 06:37:44.437890       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1002 06:37:44.465491       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1002 06:37:44.473429       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1002 06:37:45.738000       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 06:37:45.874149       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 06:37:59.423860       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1002 06:38:15.743511       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1002 06:38:15.882861       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [205ba78bdcdf484d8af0d0330d3a99ba39bdc20efa19428202c6c4cd7dfd9d33] <==
	I1002 06:37:16.426570       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:37:16.498503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:37:16.599091       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:37:16.599151       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:37:16.599225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:37:16.664219       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:37:16.664277       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:37:16.670034       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:37:16.670375       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:37:16.670399       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:37:16.671951       1 config.go:200] "Starting service config controller"
	I1002 06:37:16.671975       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:37:16.671996       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:37:16.672007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:37:16.672023       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:37:16.672032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:37:16.676259       1 config.go:309] "Starting node config controller"
	I1002 06:37:16.676302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:37:16.676311       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:37:16.772116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:37:16.772157       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:37:16.772192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b56ea6dbe0e218561ee35e4169c6c63e3160ecf828f68ed8b40ef0285f668b5e] <==
	I1002 06:37:08.294088       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:37:08.297839       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 06:37:08.298569       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:37:08.301736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 06:37:08.302088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:37:08.302287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1002 06:37:08.298598       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 06:37:08.303874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:37:08.304074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:37:08.304269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:37:08.304471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:37:08.308085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 06:37:08.317169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:37:08.317571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:37:08.317827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:37:08.317882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:37:08.317917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:37:08.317998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:37:08.318060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:37:08.325459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:37:08.325531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:37:08.325571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:37:08.325620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:37:08.325676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1002 06:37:09.602936       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 06:54:09 addons-110926 kubelet[1456]: E1002 06:54:09.060363    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-4gc9b" podUID="ab8df835-889d-49b2-8c5c-8a28a0b3b0ba"
	Oct 02 06:54:15 addons-110926 kubelet[1456]: E1002 06:54:15.245019    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 06:54:17 addons-110926 kubelet[1456]: I1002 06:54:17.243626    1456 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-bqxnl" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:54:19 addons-110926 kubelet[1456]: E1002 06:54:19.358080    1456 secret.go:189] Couldn't get secret volcano-system/volcano-admission-secret: secret "volcano-admission-secret" not found
	Oct 02 06:54:19 addons-110926 kubelet[1456]: E1002 06:54:19.358181    1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16519b81-022a-4e88-828f-109cc81af16b-admission-certs podName:16519b81-022a-4e88-828f-109cc81af16b nodeName:}" failed. No retries permitted until 2025-10-02 06:56:21.358163123 +0000 UTC m=+1151.257366718 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "admission-certs" (UniqueName: "kubernetes.io/secret/16519b81-022a-4e88-828f-109cc81af16b-admission-certs") pod "volcano-admission-6c447bd768-vbfkj" (UID: "16519b81-022a-4e88-828f-109cc81af16b") : secret "volcano-admission-secret" not found
	Oct 02 06:54:19 addons-110926 kubelet[1456]: E1002 06:54:19.358089    1456 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 06:54:19 addons-110926 kubelet[1456]: E1002 06:54:19.358622    1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b84bec7-8d9d-4d30-9860-3d491871c922-gcr-creds podName:0b84bec7-8d9d-4d30-9860-3d491871c922 nodeName:}" failed. No retries permitted until 2025-10-02 06:56:21.35860699 +0000 UTC m=+1151.257810577 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0b84bec7-8d9d-4d30-9860-3d491871c922-gcr-creds") pod "registry-creds-764b6fb674-s7sx5" (UID: "0b84bec7-8d9d-4d30-9860-3d491871c922") : secret "registry-creds-gcr" not found
	Oct 02 06:54:20 addons-110926 kubelet[1456]: E1002 06:54:20.245600    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-4gc9b" podUID="ab8df835-889d-49b2-8c5c-8a28a0b3b0ba"
	Oct 02 06:54:21 addons-110926 kubelet[1456]: I1002 06:54:21.243768    1456 scope.go:117] "RemoveContainer" containerID="84db3aa9fdfd8f289b52c369f3ecab93ed1e94bc3af0312bf2bca27328b2d0a4"
	Oct 02 06:54:21 addons-110926 kubelet[1456]: E1002 06:54:21.244004    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=volcano-scheduler pod=volcano-scheduler-76c996c8bf-jt89z_volcano-system(14e70d56-3e47-4d75-91b5-fda07d412971)\"" pod="volcano-system/volcano-scheduler-76c996c8bf-jt89z" podUID="14e70d56-3e47-4d75-91b5-fda07d412971"
	Oct 02 06:54:30 addons-110926 kubelet[1456]: E1002 06:54:30.661156    1456 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89"
	Oct 02 06:54:30 addons-110926 kubelet[1456]: E1002 06:54:30.661219    1456 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89"
	Oct 02 06:54:30 addons-110926 kubelet[1456]: E1002 06:54:30.661319    1456 kuberuntime_manager.go:1449] "Unhandled Error" err="container minikube-ingress-dns start failed in pod kube-ingress-dns-minikube_kube-system(ef8b2745-553d-44a6-984e-b4ab801f79f7): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 06:54:30 addons-110926 kubelet[1456]: E1002 06:54:30.661362    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 06:54:31 addons-110926 kubelet[1456]: E1002 06:54:31.243830    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-4gc9b" podUID="ab8df835-889d-49b2-8c5c-8a28a0b3b0ba"
	Oct 02 06:54:33 addons-110926 kubelet[1456]: I1002 06:54:33.243480    1456 scope.go:117] "RemoveContainer" containerID="84db3aa9fdfd8f289b52c369f3ecab93ed1e94bc3af0312bf2bca27328b2d0a4"
	Oct 02 06:54:33 addons-110926 kubelet[1456]: E1002 06:54:33.243718    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"volcano-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=volcano-scheduler pod=volcano-scheduler-76c996c8bf-jt89z_volcano-system(14e70d56-3e47-4d75-91b5-fda07d412971)\"" pod="volcano-system/volcano-scheduler-76c996c8bf-jt89z" podUID="14e70d56-3e47-4d75-91b5-fda07d412971"
	Oct 02 06:54:41 addons-110926 kubelet[1456]: E1002 06:54:41.246064    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 06:54:43 addons-110926 kubelet[1456]: E1002 06:54:43.244855    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-4gc9b" podUID="ab8df835-889d-49b2-8c5c-8a28a0b3b0ba"
	Oct 02 06:54:46 addons-110926 kubelet[1456]: I1002 06:54:46.245988    1456 scope.go:117] "RemoveContainer" containerID="84db3aa9fdfd8f289b52c369f3ecab93ed1e94bc3af0312bf2bca27328b2d0a4"
	Oct 02 06:54:52 addons-110926 kubelet[1456]: E1002 06:54:52.245291    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 06:54:56 addons-110926 kubelet[1456]: E1002 06:54:56.244342    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-4gc9b" podUID="ab8df835-889d-49b2-8c5c-8a28a0b3b0ba"
	Oct 02 06:54:57 addons-110926 kubelet[1456]: I1002 06:54:57.243990    1456 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-s68lt" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 06:55:05 addons-110926 kubelet[1456]: E1002 06:55:05.244831    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 06:55:08 addons-110926 kubelet[1456]: E1002 06:55:08.244053    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"main\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/volcanosh/vc-webhook-manager:v1.13.0@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/volcanosh/vc-webhook-manager@sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/volcanosh/vc-webhook-manager/manifests/sha256:03e36eb220766397b4cd9c2f42bab8666661a0112fa9033ae9bd80d2a9611001: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="volcano-system/volcano-admission-init-4gc9b" podUID="ab8df835-889d-49b2-8c5c-8a28a0b3b0ba"
	
	
	==> storage-provisioner [4829c9264d5b3ae1fc764ede230e33d7252374c2ec8cd6385777a58debef5783] <==
	W1002 06:54:46.493218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:48.496656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:48.503918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:50.506954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:50.513852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:52.516721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:52.523304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:54.526414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:54.533711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:56.537384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:56.542029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:58.546301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:54:58.551638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:00.555773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:00.564005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:02.566873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:02.571847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:04.575721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:04.582827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:06.586828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:06.591876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:08.595456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:08.602341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:10.607601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 06:55:10.614914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-110926 -n addons-110926
helpers_test.go:269: (dbg) Run:  kubectl --context addons-110926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5 volcano-admission-6c447bd768-vbfkj volcano-admission-init-4gc9b
helpers_test.go:282: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-110926 describe pod ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5 volcano-admission-6c447bd768-vbfkj volcano-admission-init-4gc9b
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-110926 describe pod ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5 volcano-admission-6c447bd768-vbfkj volcano-admission-init-4gc9b: exit status 1 (95.41912ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lw8gl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bq878" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-s7sx5" not found
	Error from server (NotFound): pods "volcano-admission-6c447bd768-vbfkj" not found
	Error from server (NotFound): pods "volcano-admission-init-4gc9b" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-110926 describe pod ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5 volcano-admission-6c447bd768-vbfkj volcano-admission-init-4gc9b: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 addons disable volcano --alsologtostderr -v=1: (12.000430161s)
--- FAIL: TestAddons/serial/Volcano (719.46s)

                                                
                                    
TestAddons/parallel/Ingress (492.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-110926 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-110926 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-110926 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [e5b2eab0-6492-4ef7-830a-22a929549537] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-110926 -n addons-110926
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-02 07:10:37.067256398 +0000 UTC m=+2067.954603735
addons_test.go:252: (dbg) Run:  kubectl --context addons-110926 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-110926 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-110926/192.168.49.2
Start Time:       Thu, 02 Oct 2025 07:02:36 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
IP:  10.244.0.32
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hqjrl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hqjrl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-110926
Warning  Failed     7m46s                   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m59s (x5 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     4m59s (x4 over 8m)      kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m59s (x5 over 8m)      kubelet            Error: ErrImagePull
Warning  Failed     2m55s (x20 over 7m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2m43s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-110926 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-110926 logs nginx -n default: exit status 1 (97.098179ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:252: kubectl --context addons-110926 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-110926
helpers_test.go:243: (dbg) docker inspect addons-110926:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d",
	        "Created": "2025-10-02T06:36:47.077600034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 814321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:36:47.138474038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/hostname",
	        "HostsPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/hosts",
	        "LogPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d-json.log",
	        "Name": "/addons-110926",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-110926:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-110926",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d",
	                "LowerDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d-init/diff:/var/lib/docker/overlay2/f1b2a52495d4d5d1e70fc487fac677b5080c5f1320773666a738aa42def3e2df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-110926",
	                "Source": "/var/lib/docker/volumes/addons-110926/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-110926",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-110926",
	                "name.minikube.sigs.k8s.io": "addons-110926",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e03dfd9e44981225a70f6640c6b12a48805938cfdd54b566df7bddffa824b2d",
	            "SandboxKey": "/var/run/docker/netns/6e03dfd9e449",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-110926": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:3c:a1:2d:84:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c2d471fc3c60a7f5a83ca737cf0a22c0c0076227d91a7e348867826280521af7",
	                    "EndpointID": "885b90e051ad80837eb5c6d3c161821bbf8a3c111f24b170e0bc233d0690c448",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-110926",
	                        "e88a06110ea1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-110926 -n addons-110926
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 logs -n 25: (1.286049999s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                      ARGS                                                                                                                                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-492765                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-492765   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-547243                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-547243   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ --download-only -p download-docker-533728 --alsologtostderr --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-533728 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ -p download-docker-533728                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-docker-533728 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ --download-only -p binary-mirror-704812 --alsologtostderr --binary-mirror http://127.0.0.1:37961 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-704812   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ -p binary-mirror-704812                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ binary-mirror-704812   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ addons  │ enable dashboard -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ start   │ -p addons-110926 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:43 UTC │
	│ addons  │ addons-110926 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ ip      │ addons-110926 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:56 UTC │ 02 Oct 25 06:56 UTC │
	│ addons  │ addons-110926 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-110926 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ enable headlamp -p addons-110926 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-110926 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-110926 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-110926 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-110926 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-110926 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons  │ addons-110926 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:36:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:36:21.580334  813918 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:36:21.580482  813918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:21.580492  813918 out.go:374] Setting ErrFile to fd 2...
	I1002 06:36:21.580497  813918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:21.580834  813918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 06:36:21.581311  813918 out.go:368] Setting JSON to false
	I1002 06:36:21.582265  813918 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22731,"bootTime":1759364251,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 06:36:21.582336  813918 start.go:140] virtualization:  
	I1002 06:36:21.585831  813918 out.go:179] * [addons-110926] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:36:21.589067  813918 notify.go:220] Checking for updates...
	I1002 06:36:21.589658  813918 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:36:21.592579  813918 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:36:21.595634  813918 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:36:21.598400  813918 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 06:36:21.601243  813918 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 06:36:21.604214  813918 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:36:21.607495  813918 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:36:21.629855  813918 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:36:21.629989  813918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:21.693096  813918 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:36:21.683464105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:21.693212  813918 docker.go:318] overlay module found
	I1002 06:36:21.698158  813918 out.go:179] * Using the docker driver based on user configuration
	I1002 06:36:21.700959  813918 start.go:304] selected driver: docker
	I1002 06:36:21.700986  813918 start.go:924] validating driver "docker" against <nil>
	I1002 06:36:21.701000  813918 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:36:21.701711  813918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:21.758634  813918 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:36:21.749346343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:21.758811  813918 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:36:21.759085  813918 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:36:21.762043  813918 out.go:179] * Using Docker driver with root privileges
	I1002 06:36:21.764916  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:36:21.764987  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:21.765005  813918 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:36:21.765078  813918 start.go:348] cluster config:
	{Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:21.768148  813918 out.go:179] * Starting "addons-110926" primary control-plane node in "addons-110926" cluster
	I1002 06:36:21.771007  813918 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 06:36:21.773962  813918 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:36:21.776817  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:21.776869  813918 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 06:36:21.776883  813918 cache.go:58] Caching tarball of preloaded images
	I1002 06:36:21.776920  813918 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:36:21.776978  813918 preload.go:233] Found /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 06:36:21.776988  813918 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 06:36:21.777328  813918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json ...
	I1002 06:36:21.777357  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json: {Name:mk2f8f9458f5bc5a3d522cc7bc03c497073f8f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:21.792651  813918 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:36:21.792805  813918 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:36:21.792830  813918 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 06:36:21.792839  813918 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 06:36:21.792848  813918 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 06:36:21.792856  813918 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 06:36:39.840628  813918 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 06:36:39.840677  813918 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:36:39.840706  813918 start.go:360] acquireMachinesLock for addons-110926: {Name:mk5b3ba2eb8943c76c6ef867a9f0efe000290e8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:36:39.840853  813918 start.go:364] duration metric: took 124.262µs to acquireMachinesLock for "addons-110926"
	I1002 06:36:39.840884  813918 start.go:93] Provisioning new machine with config: &{Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 06:36:39.840959  813918 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:36:39.844345  813918 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 06:36:39.844567  813918 start.go:159] libmachine.API.Create for "addons-110926" (driver="docker")
	I1002 06:36:39.844615  813918 client.go:168] LocalClient.Create starting
	I1002 06:36:39.844744  813918 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem
	I1002 06:36:40.158293  813918 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem
	I1002 06:36:40.423695  813918 cli_runner.go:164] Run: docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:36:40.439045  813918 cli_runner.go:211] docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:36:40.439144  813918 network_create.go:284] running [docker network inspect addons-110926] to gather additional debugging logs...
	I1002 06:36:40.439166  813918 cli_runner.go:164] Run: docker network inspect addons-110926
	W1002 06:36:40.454853  813918 cli_runner.go:211] docker network inspect addons-110926 returned with exit code 1
	I1002 06:36:40.454885  813918 network_create.go:287] error running [docker network inspect addons-110926]: docker network inspect addons-110926: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-110926 not found
	I1002 06:36:40.454900  813918 network_create.go:289] output of [docker network inspect addons-110926]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-110926 not found
	
	** /stderr **
	I1002 06:36:40.454994  813918 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:36:40.471187  813918 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3c190}
	I1002 06:36:40.471239  813918 network_create.go:124] attempt to create docker network addons-110926 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:36:40.471291  813918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-110926 addons-110926
	I1002 06:36:40.528426  813918 network_create.go:108] docker network addons-110926 192.168.49.0/24 created
	I1002 06:36:40.528461  813918 kic.go:121] calculated static IP "192.168.49.2" for the "addons-110926" container
	I1002 06:36:40.528550  813918 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:36:40.544507  813918 cli_runner.go:164] Run: docker volume create addons-110926 --label name.minikube.sigs.k8s.io=addons-110926 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:36:40.560870  813918 oci.go:103] Successfully created a docker volume addons-110926
	I1002 06:36:40.560961  813918 cli_runner.go:164] Run: docker run --rm --name addons-110926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --entrypoint /usr/bin/test -v addons-110926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:36:42.684275  813918 cli_runner.go:217] Completed: docker run --rm --name addons-110926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --entrypoint /usr/bin/test -v addons-110926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.123276184s)
	I1002 06:36:42.684309  813918 oci.go:107] Successfully prepared a docker volume addons-110926
	I1002 06:36:42.684338  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:42.684360  813918 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:36:42.684441  813918 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-110926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:36:47.011851  813918 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-110926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.327364513s)
	I1002 06:36:47.011897  813918 kic.go:203] duration metric: took 4.327533581s to extract preloaded images to volume ...
	W1002 06:36:47.012040  813918 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 06:36:47.012157  813918 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:36:47.062619  813918 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-110926 --name addons-110926 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-110926 --network addons-110926 --ip 192.168.49.2 --volume addons-110926:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:36:47.379291  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Running}}
	I1002 06:36:47.400798  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.426150  813918 cli_runner.go:164] Run: docker exec addons-110926 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:36:47.477926  813918 oci.go:144] the created container "addons-110926" has a running status.
	I1002 06:36:47.477953  813918 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa...
	I1002 06:36:47.781138  813918 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:36:47.806163  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.827180  813918 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:36:47.827199  813918 kic_runner.go:114] Args: [docker exec --privileged addons-110926 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:36:47.891791  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.911592  813918 machine.go:93] provisionDockerMachine start ...
	I1002 06:36:47.911695  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:47.930991  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:47.931327  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:47.931345  813918 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:36:47.931960  813918 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57194->127.0.0.1:33863: read: connection reset by peer
	I1002 06:36:51.072477  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110926
	
	I1002 06:36:51.072569  813918 ubuntu.go:182] provisioning hostname "addons-110926"
	I1002 06:36:51.072685  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.090401  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:51.090720  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:51.090740  813918 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-110926 && echo "addons-110926" | sudo tee /etc/hostname
	I1002 06:36:51.236050  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110926
	
	I1002 06:36:51.236138  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.258063  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:51.258373  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:51.258395  813918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-110926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-110926/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-110926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:36:51.388860  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:36:51.388887  813918 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-811293/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-811293/.minikube}
	I1002 06:36:51.388910  813918 ubuntu.go:190] setting up certificates
	I1002 06:36:51.388920  813918 provision.go:84] configureAuth start
	I1002 06:36:51.388983  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:51.405357  813918 provision.go:143] copyHostCerts
	I1002 06:36:51.405461  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem (1123 bytes)
	I1002 06:36:51.405586  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem (1679 bytes)
	I1002 06:36:51.405650  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem (1078 bytes)
	I1002 06:36:51.405711  813918 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem org=jenkins.addons-110926 san=[127.0.0.1 192.168.49.2 addons-110926 localhost minikube]
	I1002 06:36:51.612527  813918 provision.go:177] copyRemoteCerts
	I1002 06:36:51.612597  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:36:51.612649  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.629460  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:51.725298  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:36:51.743050  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:36:51.760643  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:36:51.777747  813918 provision.go:87] duration metric: took 388.803174ms to configureAuth
	I1002 06:36:51.777772  813918 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:36:51.777954  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:36:51.777961  813918 machine.go:96] duration metric: took 3.866353513s to provisionDockerMachine
	I1002 06:36:51.777968  813918 client.go:171] duration metric: took 11.933342699s to LocalClient.Create
	I1002 06:36:51.777991  813918 start.go:167] duration metric: took 11.933425856s to libmachine.API.Create "addons-110926"
	I1002 06:36:51.778000  813918 start.go:293] postStartSetup for "addons-110926" (driver="docker")
	I1002 06:36:51.778009  813918 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:36:51.778057  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:36:51.778100  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.794568  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:51.888438  813918 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:36:51.891559  813918 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:36:51.891587  813918 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:36:51.891598  813918 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/addons for local assets ...
	I1002 06:36:51.891662  813918 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/files for local assets ...
	I1002 06:36:51.891684  813918 start.go:296] duration metric: took 113.678581ms for postStartSetup
	I1002 06:36:51.891998  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:51.908094  813918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json ...
	I1002 06:36:51.908374  813918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:36:51.908417  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.924432  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.017816  813918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:36:52.022845  813918 start.go:128] duration metric: took 12.181870526s to createHost
	I1002 06:36:52.022873  813918 start.go:83] releasing machines lock for "addons-110926", held for 12.182006857s
	I1002 06:36:52.022950  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:52.040319  813918 ssh_runner.go:195] Run: cat /version.json
	I1002 06:36:52.040381  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:52.040643  813918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:36:52.040709  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:52.064673  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.078579  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.168362  813918 ssh_runner.go:195] Run: systemctl --version
	I1002 06:36:52.263150  813918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:36:52.267928  813918 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:36:52.267998  813918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:36:52.294529  813918 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 06:36:52.294574  813918 start.go:495] detecting cgroup driver to use...
	I1002 06:36:52.294607  813918 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 06:36:52.294670  813918 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 06:36:52.309592  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 06:36:52.322252  813918 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:36:52.322343  813918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:36:52.339306  813918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:36:52.357601  813918 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:36:52.498437  813918 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:36:52.636139  813918 docker.go:234] disabling docker service ...
	I1002 06:36:52.636222  813918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:36:52.659149  813918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:36:52.672149  813918 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:36:52.790045  813918 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:36:52.904510  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:36:52.917512  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:36:52.931680  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 06:36:52.940606  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 06:36:52.949651  813918 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 06:36:52.949722  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 06:36:52.958437  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 06:36:52.967122  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 06:36:52.975524  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 06:36:52.984274  813918 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:36:52.992118  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 06:36:53.000891  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 06:36:53.011203  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 06:36:53.020137  813918 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:36:53.027434  813918 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:36:53.034538  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:36:53.146732  813918 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 06:36:53.259109  813918 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 06:36:53.259213  813918 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 06:36:53.262865  813918 start.go:563] Will wait 60s for crictl version
	I1002 06:36:53.262951  813918 ssh_runner.go:195] Run: which crictl
	I1002 06:36:53.266209  813918 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:36:53.294330  813918 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 06:36:53.294471  813918 ssh_runner.go:195] Run: containerd --version
	I1002 06:36:53.317070  813918 ssh_runner.go:195] Run: containerd --version
	I1002 06:36:53.342544  813918 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 06:36:53.345439  813918 cli_runner.go:164] Run: docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:36:53.361595  813918 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:36:53.365182  813918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:36:53.374561  813918 kubeadm.go:883] updating cluster {Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:36:53.374681  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:53.374737  813918 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:36:53.399251  813918 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 06:36:53.399274  813918 containerd.go:534] Images already preloaded, skipping extraction
	I1002 06:36:53.399339  813918 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:36:53.423479  813918 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 06:36:53.423504  813918 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:36:53.423513  813918 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1002 06:36:53.423602  813918 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-110926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:36:53.423672  813918 ssh_runner.go:195] Run: sudo crictl info
	I1002 06:36:53.448450  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:36:53.448474  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:53.448496  813918 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:36:53.448523  813918 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-110926 NodeName:addons-110926 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:36:53.448665  813918 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-110926"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:36:53.448861  813918 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:36:53.457671  813918 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:36:53.457745  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:36:53.466514  813918 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1002 06:36:53.480222  813918 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:36:53.492979  813918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1002 06:36:53.506618  813918 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:36:53.510443  813918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:36:53.519937  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:36:53.633003  813918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:36:53.653268  813918 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926 for IP: 192.168.49.2
	I1002 06:36:53.653291  813918 certs.go:195] generating shared ca certs ...
	I1002 06:36:53.653331  813918 certs.go:227] acquiring lock for ca certs: {Name:mk33b75296d4c02eee9bab3e9582ce8896a2d7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:53.654149  813918 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key
	I1002 06:36:54.554249  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt ...
	I1002 06:36:54.554277  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt: {Name:mk2139057332209b98dbb746fb9a256d2b754164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.554459  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key ...
	I1002 06:36:54.554470  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key: {Name:mkcae11ed523222e33231ecbd86e12b64a288b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.554546  813918 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key
	I1002 06:36:54.895364  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt ...
	I1002 06:36:54.895399  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt: {Name:mke2bb76dd7b81d2d26af5e116b652209f0542b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.895600  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key ...
	I1002 06:36:54.895614  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key: {Name:mkc32897a4730ab5fb973fb69d1a38ca87d85c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.896344  813918 certs.go:257] generating profile certs ...
	I1002 06:36:54.896423  813918 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key
	I1002 06:36:54.896442  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt with IP's: []
	I1002 06:36:55.419216  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt ...
	I1002 06:36:55.419259  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: {Name:mk10e15791cbf0b0edd868b4fdb8e230e5e309e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.419452  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key ...
	I1002 06:36:55.419466  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key: {Name:mk9f0a92cebc1827b3a9e95b7f53c1d4b6a59638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.419563  813918 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549
	I1002 06:36:55.419584  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 06:36:55.722878  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 ...
	I1002 06:36:55.722908  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549: {Name:mk85eea21d417032742d45805e5f307e924f0055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.723654  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549 ...
	I1002 06:36:55.723671  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549: {Name:mkf298fb25e09f690a5e28cc66f4a6b37f67e15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.724361  813918 certs.go:382] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt
	I1002 06:36:55.724446  813918 certs.go:386] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549 -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key
	I1002 06:36:55.724499  813918 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key
	I1002 06:36:55.724522  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt with IP's: []
	I1002 06:36:56.363048  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt ...
	I1002 06:36:56.363081  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt: {Name:mk4c25ab58ebf52954efb245b3c0c0d9e1c6bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:56.363911  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key ...
	I1002 06:36:56.363932  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key: {Name:mk7f28565479e9a862d5049acbcab89444bf5a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:56.364713  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:36:56.364779  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:36:56.364814  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:36:56.364842  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem (1679 bytes)
	I1002 06:36:56.365421  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:36:56.384138  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 06:36:56.402907  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:36:56.420429  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:36:56.438118  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:36:56.455787  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:36:56.473374  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:36:56.490901  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:36:56.509097  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:36:56.526744  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:36:56.539426  813918 ssh_runner.go:195] Run: openssl version
	I1002 06:36:56.545473  813918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:36:56.553848  813918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.557589  813918 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.557674  813918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.599790  813918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:36:56.608153  813918 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:36:56.611552  813918 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:36:56.611600  813918 kubeadm.go:400] StartCluster: {Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:56.611680  813918 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 06:36:56.611736  813918 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:36:56.639982  813918 cri.go:89] found id: ""
	I1002 06:36:56.640052  813918 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:36:56.647729  813918 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:36:56.655474  813918 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:36:56.655568  813918 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:36:56.663121  813918 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:36:56.663142  813918 kubeadm.go:157] found existing configuration files:
	
	I1002 06:36:56.663221  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:36:56.670874  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:36:56.670972  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:36:56.678534  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:36:56.685938  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:36:56.685996  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:36:56.692708  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:36:56.699925  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:36:56.700015  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:36:56.707153  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:36:56.714621  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:36:56.714749  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:36:56.722338  813918 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:36:56.759248  813918 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:36:56.759571  813918 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:36:56.790582  813918 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:36:56.790657  813918 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 06:36:56.790699  813918 kubeadm.go:318] OS: Linux
	I1002 06:36:56.790763  813918 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:36:56.790820  813918 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 06:36:56.790875  813918 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:36:56.790936  813918 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:36:56.790994  813918 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:36:56.791049  813918 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:36:56.791100  813918 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:36:56.791153  813918 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:36:56.791207  813918 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 06:36:56.880850  813918 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:36:56.880966  813918 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:36:56.881067  813918 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:36:56.886790  813918 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:36:56.890544  813918 out.go:252]   - Generating certificates and keys ...
	I1002 06:36:56.890681  813918 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:36:56.890776  813918 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:36:57.277686  813918 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:36:57.698690  813918 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:36:58.123771  813918 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:36:58.316428  813918 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:36:58.712844  813918 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:36:58.713106  813918 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-110926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:36:59.412304  813918 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:36:59.412590  813918 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-110926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:36:59.506243  813918 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:37:00.458571  813918 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:37:00.702742  813918 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:37:00.703124  813918 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:37:01.245158  813918 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:37:01.470802  813918 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:37:01.723353  813918 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:37:01.786251  813918 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:37:02.286866  813918 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:37:02.287602  813918 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:37:02.290493  813918 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:37:02.293946  813918 out.go:252]   - Booting up control plane ...
	I1002 06:37:02.294063  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:37:02.294988  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:37:02.295992  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:37:02.312503  813918 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:37:02.312871  813918 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:37:02.320595  813918 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:37:02.321016  813918 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:37:02.321262  813918 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:37:02.457350  813918 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:37:02.457522  813918 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:37:03.461255  813918 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00198836s
	I1002 06:37:03.463308  813918 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:37:03.463532  813918 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:37:03.463645  813918 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:37:03.464191  813918 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:37:06.566691  813918 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.102303507s
	I1002 06:37:08.316492  813918 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.851816452s
	I1002 06:37:09.465139  813918 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001507743s
	I1002 06:37:09.489317  813918 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:37:09.522458  813918 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:37:09.556453  813918 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:37:09.556687  813918 kubeadm.go:318] [mark-control-plane] Marking the node addons-110926 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:37:09.572399  813918 kubeadm.go:318] [bootstrap-token] Using token: 7g41rx.fb6mqimdeeyoknq9
	I1002 06:37:09.575450  813918 out.go:252]   - Configuring RBAC rules ...
	I1002 06:37:09.575583  813918 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:37:09.580181  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:37:09.588090  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:37:09.592801  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:37:09.600582  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:37:09.607878  813918 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:37:09.872917  813918 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:37:10.299814  813918 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:37:10.872732  813918 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:37:10.874055  813918 kubeadm.go:318] 
	I1002 06:37:10.874135  813918 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:37:10.874146  813918 kubeadm.go:318] 
	I1002 06:37:10.874227  813918 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:37:10.874248  813918 kubeadm.go:318] 
	I1002 06:37:10.874283  813918 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:37:10.874350  813918 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:37:10.874409  813918 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:37:10.874417  813918 kubeadm.go:318] 
	I1002 06:37:10.874473  813918 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:37:10.874482  813918 kubeadm.go:318] 
	I1002 06:37:10.874532  813918 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:37:10.874540  813918 kubeadm.go:318] 
	I1002 06:37:10.874595  813918 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:37:10.874679  813918 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:37:10.874756  813918 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:37:10.874764  813918 kubeadm.go:318] 
	I1002 06:37:10.874852  813918 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:37:10.874936  813918 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:37:10.874945  813918 kubeadm.go:318] 
	I1002 06:37:10.875033  813918 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7g41rx.fb6mqimdeeyoknq9 \
	I1002 06:37:10.875146  813918 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 \
	I1002 06:37:10.875172  813918 kubeadm.go:318] 	--control-plane 
	I1002 06:37:10.875181  813918 kubeadm.go:318] 
	I1002 06:37:10.875270  813918 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:37:10.875279  813918 kubeadm.go:318] 
	I1002 06:37:10.875365  813918 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7g41rx.fb6mqimdeeyoknq9 \
	I1002 06:37:10.875475  813918 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 
	I1002 06:37:10.878324  813918 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 06:37:10.878562  813918 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 06:37:10.878676  813918 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:37:10.878697  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:37:10.878705  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:37:10.881877  813918 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 06:37:10.884817  813918 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 06:37:10.889466  813918 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 06:37:10.889488  813918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 06:37:10.902465  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 06:37:11.181141  813918 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:37:11.181229  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:11.181309  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-110926 minikube.k8s.io/updated_at=2025_10_02T06_37_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-110926 minikube.k8s.io/primary=true
	I1002 06:37:11.362613  813918 ops.go:34] apiserver oom_adj: -16
	I1002 06:37:11.362717  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:11.863387  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:12.363462  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:12.863468  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:13.362840  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:13.863815  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:14.363244  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:14.495136  813918 kubeadm.go:1113] duration metric: took 3.313961954s to wait for elevateKubeSystemPrivileges
	I1002 06:37:14.495171  813918 kubeadm.go:402] duration metric: took 17.883574483s to StartCluster
	I1002 06:37:14.495189  813918 settings.go:142] acquiring lock: {Name:mkfabb257d5e6dc89516b7f3eecfb5ad470245b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:14.495908  813918 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:37:14.496318  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/kubeconfig: {Name:mk61b1a16c6c070d43ba1e4fed7f7f8861077db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:14.497144  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:37:14.497165  813918 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 06:37:14.497416  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:14.497447  813918 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:37:14.497542  813918 addons.go:69] Setting yakd=true in profile "addons-110926"
	I1002 06:37:14.497556  813918 addons.go:238] Setting addon yakd=true in "addons-110926"
	I1002 06:37:14.497579  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.497665  813918 addons.go:69] Setting inspektor-gadget=true in profile "addons-110926"
	I1002 06:37:14.497681  813918 addons.go:238] Setting addon inspektor-gadget=true in "addons-110926"
	I1002 06:37:14.497701  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.498032  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.498105  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.498760  813918 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-110926"
	I1002 06:37:14.498784  813918 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-110926"
	I1002 06:37:14.498819  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.499233  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.504834  813918 addons.go:69] Setting metrics-server=true in profile "addons-110926"
	I1002 06:37:14.504923  813918 addons.go:238] Setting addon metrics-server=true in "addons-110926"
	I1002 06:37:14.504988  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.505608  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.507518  813918 out.go:179] * Verifying Kubernetes components...
	I1002 06:37:14.507725  813918 addons.go:69] Setting cloud-spanner=true in profile "addons-110926"
	I1002 06:37:14.507753  813918 addons.go:238] Setting addon cloud-spanner=true in "addons-110926"
	I1002 06:37:14.507795  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.508276  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.519123  813918 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-110926"
	I1002 06:37:14.519204  813918 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-110926"
	I1002 06:37:14.519258  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.523209  813918 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-110926"
	I1002 06:37:14.523335  813918 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-110926"
	I1002 06:37:14.523396  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.523909  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.524419  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.536906  813918 addons.go:69] Setting registry=true in profile "addons-110926"
	I1002 06:37:14.536941  813918 addons.go:238] Setting addon registry=true in "addons-110926"
	I1002 06:37:14.536983  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.537475  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.539289  813918 addons.go:69] Setting default-storageclass=true in profile "addons-110926"
	I1002 06:37:14.558568  813918 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-110926"
	I1002 06:37:14.559019  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.559239  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:14.541208  813918 addons.go:69] Setting registry-creds=true in profile "addons-110926"
	I1002 06:37:14.561178  813918 addons.go:238] Setting addon registry-creds=true in "addons-110926"
	I1002 06:37:14.561363  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.541231  813918 addons.go:69] Setting storage-provisioner=true in profile "addons-110926"
	I1002 06:37:14.563047  813918 addons.go:238] Setting addon storage-provisioner=true in "addons-110926"
	I1002 06:37:14.563932  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.566547  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.541239  813918 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-110926"
	I1002 06:37:14.579820  813918 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-110926"
	I1002 06:37:14.580221  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.586764  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.541246  813918 addons.go:69] Setting volcano=true in profile "addons-110926"
	I1002 06:37:14.607872  813918 addons.go:238] Setting addon volcano=true in "addons-110926"
	I1002 06:37:14.541349  813918 addons.go:69] Setting volumesnapshots=true in profile "addons-110926"
	I1002 06:37:14.607929  813918 addons.go:238] Setting addon volumesnapshots=true in "addons-110926"
	I1002 06:37:14.607950  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.556898  813918 addons.go:69] Setting gcp-auth=true in profile "addons-110926"
	I1002 06:37:14.624993  813918 mustload.go:65] Loading cluster: addons-110926
	I1002 06:37:14.625253  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:14.625626  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.556924  813918 addons.go:69] Setting ingress=true in profile "addons-110926"
	I1002 06:37:14.631873  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.632366  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.556929  813918 addons.go:69] Setting ingress-dns=true in profile "addons-110926"
	I1002 06:37:14.632643  813918 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:37:14.631728  813918 addons.go:238] Setting addon ingress=true in "addons-110926"
	I1002 06:37:14.633388  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.633841  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.650708  813918 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:37:14.654882  813918 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:37:14.654909  813918 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:37:14.654981  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.659338  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.671893  813918 addons.go:238] Setting addon ingress-dns=true in "addons-110926"
	I1002 06:37:14.671956  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.672451  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.681943  813918 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:37:14.682145  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:37:14.682171  813918 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:37:14.682243  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.730779  813918 addons.go:238] Setting addon default-storageclass=true in "addons-110926"
	I1002 06:37:14.730824  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.731463  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.736081  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:37:14.743901  813918 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:37:14.748859  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:37:14.749029  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.798861  813918 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:37:14.801456  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:37:14.801501  813918 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:37:14.801637  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.840051  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:37:14.844935  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:37:14.848913  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:37:14.851733  813918 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:37:14.854520  813918 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:37:14.857638  813918 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:37:14.858717  813918 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:37:14.858738  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:37:14.858817  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.860526  813918 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:37:14.860546  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:37:14.860632  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.893874  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:37:14.894058  813918 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:37:14.897434  813918 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:37:14.897458  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:37:14.897547  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.918428  813918 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-110926"
	I1002 06:37:14.918472  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.918875  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.921121  813918 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:37:14.925950  813918 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:37:14.925974  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:37:14.926042  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.945293  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.949541  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:37:14.956438  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:37:14.957575  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:14.966829  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:37:14.967843  813918 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:37:14.983357  813918 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1002 06:37:14.991256  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:37:14.991531  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:37:14.991690  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:14.992663  813918 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:37:14.992678  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:37:14.992742  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.996512  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:14.996904  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:37:14.996921  813918 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:37:14.996989  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.005391  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:37:15.005812  813918 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:37:15.006640  813918 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:37:15.006661  813918 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:37:15.006739  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.008284  813918 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1002 06:37:15.009342  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:37:15.009438  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:37:15.009541  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.028005  813918 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1002 06:37:15.033152  813918 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 06:37:15.033183  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1002 06:37:15.033275  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.054617  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.055541  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:15.055750  813918 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:37:15.055763  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:37:15.055832  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.061085  813918 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:37:15.061106  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:37:15.061173  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.074564  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.081642  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.111200  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.136860  813918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:37:15.148801  813918 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:37:15.151741  813918 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:37:15.156261  813918 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:37:15.156284  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:37:15.156355  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.169924  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.193516  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.199715  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.214370  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.237018  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.237601  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.243930  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.244071  813918 retry.go:31] will retry after 305.561491ms: ssh: handshake failed: EOF
	I1002 06:37:15.251932  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.255879  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.259811  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.259836  813918 retry.go:31] will retry after 210.072349ms: ssh: handshake failed: EOF
	I1002 06:37:15.265683  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.272079  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.565323  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.565348  813918 retry.go:31] will retry after 243.153386ms: ssh: handshake failed: EOF
	I1002 06:37:15.846286  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:37:15.846311  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:37:15.944527  813918 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:15.944599  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:37:15.970354  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:37:15.985885  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:37:16.012665  813918 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:37:16.012693  813918 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:37:16.019458  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:37:16.019485  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:37:16.043516  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:37:16.043539  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:37:16.060218  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:37:16.072624  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:37:16.090843  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 06:37:16.096286  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:37:16.096364  813918 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:37:16.184119  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:16.205029  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:37:16.206409  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:37:16.211099  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:37:16.221140  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:37:16.281478  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:37:16.281550  813918 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:37:16.294235  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:37:16.294308  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:37:16.314044  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:37:16.314122  813918 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:37:16.314878  813918 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:37:16.314923  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:37:16.334271  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:37:16.435552  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:37:16.435625  813918 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:37:16.486137  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:37:16.508790  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:37:16.508817  813918 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:37:16.527074  813918 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.79094086s)
	I1002 06:37:16.527103  813918 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
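[editor's note] The sed pipeline that just completed rewrites the coredns ConfigMap so in-cluster DNS resolves host.minikube.internal to the host gateway. Reconstructed from the sed expressions above (a sketch, not the actual ConfigMap contents), the injected Corefile fragment looks roughly like:

```
hosts {
   192.168.49.1 host.minikube.internal
   fallthrough
}
```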
	I1002 06:37:16.527172  813918 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.390287567s)
	I1002 06:37:16.527930  813918 node_ready.go:35] waiting up to 6m0s for node "addons-110926" to be "Ready" ...
	I1002 06:37:16.692302  813918 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:16.692321  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:37:16.739744  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:37:16.739768  813918 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:37:16.803024  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:37:16.866551  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:16.918292  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:37:16.918317  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:37:16.976907  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:37:16.976934  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:37:17.032696  813918 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-110926" context rescaled to 1 replicas
	I1002 06:37:17.174089  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:37:17.174115  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:37:17.194531  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:37:17.590550  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:37:17.590575  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:37:17.985718  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:37:17.985751  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:37:18.258016  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:37:18.258042  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:37:18.426273  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:37:18.426298  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	W1002 06:37:18.558468  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:18.892311  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:37:18.892338  813918 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:37:19.094159  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:37:19.094182  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:37:19.262380  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:37:19.262404  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:37:19.445644  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:37:19.445669  813918 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:37:19.720946  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 06:37:21.041084  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:21.578538  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.608100964s)
	I1002 06:37:21.578618  813918 addons.go:479] Verifying addon ingress=true in "addons-110926"
	I1002 06:37:21.579021  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.5930618s)
	I1002 06:37:21.579193  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.518951153s)
	I1002 06:37:21.579261  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.506611096s)
	I1002 06:37:21.582085  813918 out.go:179] * Verifying ingress addon...
	I1002 06:37:21.586543  813918 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:37:21.655191  813918 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:37:21.655263  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.115015  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.583411  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:37:22.583564  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:22.610354  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:22.612089  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.737638  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:37:22.767377  813918 addons.go:238] Setting addon gcp-auth=true in "addons-110926"
	I1002 06:37:22.767434  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:22.767894  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:22.793827  813918 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:37:22.793887  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:22.830306  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:23.096079  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:23.101826  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:23.167688  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (7.0767591s)
	I1002 06:37:23.167794  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.983606029s)
	W1002 06:37:23.167817  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:23.167835  813918 retry.go:31] will retry after 146.597414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
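[editor's note] The validator's complaint above ("apiVersion not set, kind not set") means at least one YAML document in ig-crd.yaml lacks the two mandatory top-level fields every Kubernetes manifest needs. A minimal self-contained sketch of that failure mode (the sample manifest below is hypothetical, not the real ig-crd.yaml):

```shell
# Write a sample YAML document that is missing apiVersion/kind,
# mirroring the shape of document kubectl rejected above.
cat > /tmp/ig-crd-sample.yaml <<'EOF'
metadata:
  name: traces.gadget.kinvolk.io
spec: {}
EOF

# kubectl's client-side validation rejects such a document; a cheap
# local proxy for that check is counting the required top-level fields.
count=$(grep -cE '^(apiVersion|kind):' /tmp/ig-crd-sample.yaml || true)
echo "required top-level fields found: $count"
```

A manifest that passes validation would report 2 here (one `apiVersion:` line and one `kind:` line per document).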
	I1002 06:37:23.167865  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.9627765s)
	I1002 06:37:23.167924  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.96145652s)
	I1002 06:37:23.167989  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.956824802s)
	I1002 06:37:23.168168  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.946960517s)
	I1002 06:37:23.168215  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.833882657s)
	I1002 06:37:23.168229  813918 addons.go:479] Verifying addon registry=true in "addons-110926"
	I1002 06:37:23.168432  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.682270471s)
	I1002 06:37:23.168504  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.365459957s)
	I1002 06:37:23.168515  813918 addons.go:479] Verifying addon metrics-server=true in "addons-110926"
	I1002 06:37:23.168593  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.302013657s)
	W1002 06:37:23.168612  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:37:23.168628  813918 retry.go:31] will retry after 145.945512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
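[editor's note] The "resource mapping not found … ensure CRDs are installed first" failure above is the usual CRD-ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, before the API server has registered them (minikube handles this by retrying, as the log shows). A hedged sketch of sequencing the same files manually to avoid the race (file paths taken from the log; a reachable cluster and kubeconfig are assumed):

```shell
# Apply the snapshot CRDs on their own first (paths from the log above).
kubectl apply \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml

# Block until the API server has registered the new kind...
kubectl wait --for=condition=established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io

# ...after which the VolumeSnapshotClass applies without the mapping error.
kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
```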
	I1002 06:37:23.168670  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.974112429s)
	I1002 06:37:23.171600  813918 out.go:179] * Verifying registry addon...
	I1002 06:37:23.175423  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:37:23.175675  813918 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-110926 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:37:23.215812  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.494815173s)
	I1002 06:37:23.215842  813918 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-110926"
	I1002 06:37:23.218592  813918 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:37:23.218725  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:23.222422  813918 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:37:23.223098  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:37:23.225306  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:37:23.225336  813918 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:37:23.265230  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:37:23.265257  813918 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:37:23.271284  813918 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:37:23.271303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:23.301079  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:37:23.301100  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:37:23.315262  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:23.315479  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:23.362438  813918 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:37:23.362461  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:23.371447  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:37:23.590215  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:23.690482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:23.726143  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.091791  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:24.192769  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:24.240956  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.605709  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:24.703226  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:24.726522  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.936549  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.621028233s)
	I1002 06:37:24.936718  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.621420893s)
	W1002 06:37:24.936789  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:24.936837  813918 retry.go:31] will retry after 561.608809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:24.936908  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.565434855s)
	I1002 06:37:24.939978  813918 addons.go:479] Verifying addon gcp-auth=true in "addons-110926"
	I1002 06:37:24.944986  813918 out.go:179] * Verifying gcp-auth addon...
	I1002 06:37:24.948596  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:37:24.951413  813918 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:37:24.951434  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:25.090748  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:25.178550  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:25.226439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:25.452219  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:25.499574  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:37:25.531518  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:25.589865  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:25.683530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:25.726612  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:25.951542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:26.090750  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:26.179030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:26.226732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:26.317076  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:26.317226  813918 retry.go:31] will retry after 583.727209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:26.452148  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:26.589788  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:26.683078  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:26.727068  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:26.901144  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:26.952896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:27.091613  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:27.179042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:27.226561  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:27.451348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:27.531649  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:27.591525  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:27.683031  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:27.712297  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:27.712326  813918 retry.go:31] will retry after 648.169313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:27.726104  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:27.952014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:28.090463  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:28.191332  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:28.226482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:28.360900  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:28.452621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:28.590622  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:28.684494  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:28.726619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:28.952459  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:29.090817  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:29.180514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:29.185770  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:29.185799  813918 retry.go:31] will retry after 638.486695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:29.226864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:29.451636  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:29.589804  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:29.683512  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:29.726574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:29.824932  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:29.952114  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:30.032649  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:30.090885  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:30.179094  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:30.226154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:30.452508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:30.592222  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:30.684732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:30.698805  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:30.698840  813918 retry.go:31] will retry after 1.386655025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:30.726921  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:30.951637  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:31.090673  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:31.178664  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:31.226447  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:31.452374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:31.590331  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:31.682815  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:31.726337  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:31.952229  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:32.086627  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:32.090653  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:32.179238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:32.226721  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:32.452452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:32.530986  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:32.590889  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:32.683805  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:32.727482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:32.884199  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:32.884242  813918 retry.go:31] will retry after 1.764941661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:32.952014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:33.090182  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:33.179042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:33.226874  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:33.451508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:33.590092  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:33.682977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:33.725974  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:33.951836  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:34.090782  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:34.178819  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:34.226525  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:34.452486  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:34.531295  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:34.590650  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:34.649946  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:34.686870  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:34.726748  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:34.952390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:35.093119  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:35.179530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:35.226048  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:35.451917  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:35.484501  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:35.484530  813918 retry.go:31] will retry after 6.007881753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:35.590705  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:35.683551  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:35.726503  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:35.952327  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:36.090688  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:36.191481  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:36.226150  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:36.452471  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:36.590726  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:36.683932  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:36.727072  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:36.951909  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:37.032811  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:37.090041  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:37.178985  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:37.226683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:37.451377  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:37.590155  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:37.683502  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:37.726422  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:37.951666  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:38.090533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:38.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:38.226290  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:38.452969  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:38.589891  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:38.678445  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:38.726426  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:38.951569  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:39.090363  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:39.178554  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:39.226682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:39.451688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:39.531480  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:39.589495  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:39.683560  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:39.726605  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:39.951696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:40.090353  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:40.179467  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:40.226430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:40.451667  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:40.590213  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:40.682834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:40.726735  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:40.951452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:41.090424  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:41.178251  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:41.225935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:41.451935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:41.493320  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:37:41.531920  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:41.590388  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:41.682815  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:41.727080  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:41.951832  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:42.097513  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:42.180007  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:42.228335  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:42.397373  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:42.397404  813918 retry.go:31] will retry after 6.331757331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
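The apply failure above is kubectl's client-side validation: every object in an applied manifest must declare the `apiVersion` and `kind` header fields, and the `ig-crd.yaml` shipped to the node apparently lacks both. A minimal sketch of the check kubectl is complaining about; the dicts are hypothetical stand-ins, not the actual contents of `ig-crd.yaml`:

```python
# Mimic kubectl's client-side manifest validation: each object must set
# apiVersion and kind. These dicts are illustrative, not the real ig-crd.yaml.

def validation_errors(obj: dict) -> list[str]:
    """Return kubectl-style messages for missing top-level header fields."""
    return [f"{field} not set" for field in ("apiVersion", "kind") if not obj.get(field)]

valid = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "example.gadget.io"},  # hypothetical CRD name
}
broken = {"metadata": {"name": "example.gadget.io"}}  # header fields stripped

print(validation_errors(valid))   # []
print(validation_errors(broken))  # ['apiVersion not set', 'kind not set']
```

As the stderr suggests, `--validate=false` would suppress the error, but the server would still reject an object whose group/version/kind cannot be resolved.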
	I1002 06:37:42.452908  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:42.590432  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:42.683443  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:42.726508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:42.952318  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:43.090165  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:43.178978  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:43.225896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:43.451987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:43.590602  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:43.678528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:43.726661  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:43.951424  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:44.031312  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:44.090520  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:44.178976  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:44.226569  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:44.451727  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:44.596784  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:44.697937  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:44.726640  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:44.951415  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:45.090703  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:45.179490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:45.227523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:45.451631  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:45.589687  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:45.683601  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:45.727673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:45.951624  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:46.031927  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:46.090068  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:46.178708  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:46.226451  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:46.451429  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:46.590533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:46.678457  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:46.726355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:46.952193  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:47.090132  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:47.179505  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:47.226590  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:47.451700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:47.590360  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:47.683040  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:47.725863  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:47.952219  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:48.090642  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:48.178440  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:48.226648  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:48.451752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:48.531666  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:48.590304  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:48.678358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:48.726321  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:48.729320  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:48.951489  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:49.091175  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:49.180116  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:49.226101  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:49.452407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:49.530266  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:49.530298  813918 retry.go:31] will retry after 12.414314859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
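The retry delays logged above (6.33s after the first failure, 12.41s after the second) are consistent with exponential backoff plus jitter. A minimal sketch of such a schedule; the base, factor, and jitter constants are illustrative choices, not minikube's actual `retry.go` values:

```python
import random

def backoff_schedule(base: float = 5.0, factor: float = 2.0,
                     jitter: float = 0.5, tries: int = 4) -> list[float]:
    """Exponentially growing delays with multiplicative jitter.
    Delay n falls in [base * factor**n, base * factor**n * (1 + jitter)]."""
    delays = []
    delay = base
    for _ in range(tries):
        delays.append(delay * (1 + random.uniform(0.0, jitter)))
        delay *= factor
    return delays
```

With these constants the second delay lands in [10, 15] seconds, bracketing the 12.41s seen in the log.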
	I1002 06:37:49.590599  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:49.683495  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:49.726800  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:49.951645  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:50.090598  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:50.178639  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:50.226627  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:50.451589  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:50.590544  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:50.682812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:50.726927  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:50.951882  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:51.030659  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:51.089892  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:51.179276  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:51.225934  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:51.451935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:51.589726  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:51.683005  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:51.725957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:51.951996  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:52.091773  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:52.178278  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:52.226119  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:52.451977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:52.590251  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:52.683413  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:52.726061  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:52.952248  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:53.031163  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:53.090127  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:53.178995  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:53.227062  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:53.452030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:53.590043  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:53.683319  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:53.726034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:53.951951  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:54.090498  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:54.178558  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:54.226461  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:54.451500  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:54.590406  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:54.683724  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:54.726962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:54.952006  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:55.031442  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:55.091214  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:55.179018  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:55.225804  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:55.451548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:55.590030  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:55.682894  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:55.726632  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:55.951851  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:56.090254  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:56.179316  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:56.225963  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:56.451980  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:56.589903  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:56.683768  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:56.726710  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:56.969890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:57.039661  813918 node_ready.go:49] node "addons-110926" is "Ready"
	I1002 06:37:57.039759  813918 node_ready.go:38] duration metric: took 40.511800003s for node "addons-110926" to be "Ready" ...
	I1002 06:37:57.039788  813918 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:57.039875  813918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:57.093303  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:57.094841  813918 api_server.go:72] duration metric: took 42.597646349s to wait for apiserver process to appear ...
	I1002 06:37:57.094869  813918 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:37:57.094891  813918 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 06:37:57.110477  813918 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 06:37:57.112002  813918 api_server.go:141] control plane version: v1.34.1
	I1002 06:37:57.112039  813918 api_server.go:131] duration metric: took 17.162356ms to wait for apiserver health ...
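The healthz wait above is a simple poll: GET the endpoint until it answers HTTP 200 with body `ok`. A sketch of that loop; the endpoint URL comes from the log, while skipping TLS verification (minikube's apiserver uses a cluster-local CA) and the retry/timeout constants are assumptions for illustration:

```python
import ssl
import time
import urllib.request

def wait_healthz(url: str, timeout: float = 60.0) -> bool:
    """Poll url until it returns HTTP 200 with body b'ok', or timeout expires."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False   # cluster-local CA; verification skipped (assumption)
    ctx.verify_mode = ssl.CERT_NONE
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
                if resp.status == 200 and resp.read().strip() == b"ok":
                    return True
        except OSError:
            pass                 # apiserver not reachable yet; retry
        time.sleep(1)
    return False

# wait_healthz("https://192.168.49.2:8443/healthz")  # endpoint from the log above
```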
	I1002 06:37:57.112050  813918 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:37:57.164751  813918 system_pods.go:59] 19 kube-system pods found
	I1002 06:37:57.164836  813918 system_pods.go:61] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.164843  813918 system_pods.go:61] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.164850  813918 system_pods.go:61] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.164855  813918 system_pods.go:61] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending
	I1002 06:37:57.164860  813918 system_pods.go:61] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.164866  813918 system_pods.go:61] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.164895  813918 system_pods.go:61] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.164906  813918 system_pods.go:61] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.164911  813918 system_pods.go:61] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.164915  813918 system_pods.go:61] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.164927  813918 system_pods.go:61] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.164931  813918 system_pods.go:61] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.164936  813918 system_pods.go:61] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.164940  813918 system_pods.go:61] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.164952  813918 system_pods.go:61] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.164956  813918 system_pods.go:61] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.164969  813918 system_pods.go:61] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.164978  813918 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending
	I1002 06:37:57.164984  813918 system_pods.go:61] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.164996  813918 system_pods.go:74] duration metric: took 52.940352ms to wait for pod list to return data ...
	I1002 06:37:57.165020  813918 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:37:57.180144  813918 default_sa.go:45] found service account: "default"
	I1002 06:37:57.180178  813918 default_sa.go:55] duration metric: took 15.149731ms for default service account to be created ...
	I1002 06:37:57.180188  813918 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:37:57.222552  813918 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:37:57.222577  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:57.223365  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.223410  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.223418  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.223424  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.223428  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending
	I1002 06:37:57.223442  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.223456  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.223462  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.223474  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.223481  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.223485  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.223492  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.223496  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.223503  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.223507  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.223510  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.223514  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.223521  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.223531  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending
	I1002 06:37:57.223536  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.223550  813918 retry.go:31] will retry after 203.421597ms: missing components: kube-dns
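The retry above fires because the `system_pods` wait requires certain components to have a Running pod, and `kube-dns` is still missing while the `coredns-*` pod is Pending. A minimal sketch of that condition; the component-to-pod-prefix mapping is an assumption for illustration, not minikube's actual table:

```python
def missing_components(pods: dict[str, str], required: dict[str, str]) -> list[str]:
    """pods maps pod name -> phase; required maps component -> pod-name prefix.
    Returns the components with no Running pod, mirroring the retry condition."""
    return [
        comp for comp, prefix in required.items()
        if not any(name.startswith(prefix) and phase == "Running"
                   for name, phase in pods.items())
    ]

pods = {"coredns-66bc5c9577-s68lt": "Pending", "kube-proxy-4zvzf": "Running"}
required = {"kube-dns": "coredns", "kube-proxy": "kube-proxy"}  # assumed mapping
print(missing_components(pods, required))  # ['kube-dns']
```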
	I1002 06:37:57.317769  813918 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:37:57.317813  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:57.437762  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.437803  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.437810  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.437815  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.437821  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:57.437826  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.437841  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.437853  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.437869  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.437874  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.437877  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.437882  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.437900  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.437905  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.437909  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.437913  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.437926  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.437937  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.437946  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.437955  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.437969  813918 retry.go:31] will retry after 264.460556ms: missing components: kube-dns
	I1002 06:37:57.457586  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:57.591211  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:57.684302  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:57.707934  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.707975  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.707990  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:57.708000  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:57.708018  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:57.708030  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.708035  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.708040  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.708051  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.708113  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:57.708129  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.708172  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.708184  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:57.708195  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:57.708207  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:57.708220  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:57.708228  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:57.708247  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.708255  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.708270  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:57.708285  813918 retry.go:31] will retry after 422.985157ms: missing components: kube-dns
	I1002 06:37:57.742917  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:57.952834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:58.091317  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:58.137271  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:58.137312  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:58.137322  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:58.137331  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:58.137338  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:58.137342  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:58.137350  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:58.137355  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:58.137359  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:58.137366  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:58.137375  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:58.137380  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:58.137386  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:58.137399  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:58.137411  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:58.137417  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:58.137426  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:58.137433  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.137444  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.137451  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:58.137467  813918 retry.go:31] will retry after 586.146569ms: missing components: kube-dns
	I1002 06:37:58.178407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:58.235878  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:58.452723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:58.614086  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:58.705574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:58.752782  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:58.752871  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Running
	I1002 06:37:58.752902  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:58.752951  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:58.752984  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:58.753015  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:58.753040  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:58.753071  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:58.753100  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:58.753128  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:58.753156  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:58.753185  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:58.753215  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:58.753246  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:58.753287  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:58.753323  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:58.753344  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:58.753369  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.753402  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.753429  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:58.753455  813918 system_pods.go:126] duration metric: took 1.573257013s to wait for k8s-apps to be running ...
	I1002 06:37:58.753478  813918 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:37:58.753557  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:37:58.756092  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:58.811373  813918 system_svc.go:56] duration metric: took 57.886892ms WaitForService to wait for kubelet
	I1002 06:37:58.811449  813918 kubeadm.go:586] duration metric: took 44.314256903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:58.811493  813918 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:37:58.822249  813918 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 06:37:58.822353  813918 node_conditions.go:123] node cpu capacity is 2
	I1002 06:37:58.822383  813918 node_conditions.go:105] duration metric: took 10.860686ms to run NodePressure ...
	I1002 06:37:58.822420  813918 start.go:241] waiting for startup goroutines ...
	I1002 06:37:58.952958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:59.090849  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:59.194378  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:59.293675  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:59.453551  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:59.590199  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:59.683743  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:59.727149  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:59.952566  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:00.095335  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:00.179662  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:00.233910  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:00.456053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:00.590708  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:00.683163  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:00.726621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:00.952293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:01.091005  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:01.179669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:01.229085  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:01.453177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:01.591279  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:01.686492  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:01.728097  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:01.945617  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:01.952810  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:02.090686  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:02.179657  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:02.228561  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:02.452023  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:02.591508  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:02.683154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:02.726517  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:38:02.824299  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:02.824331  813918 retry.go:31] will retry after 15.691806375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
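The validation error above occurs when a YAML document in the applied file is missing the two fields kubectl requires in every Kubernetes manifest, `apiVersion` and `kind` (for example, a stray document separator leaving an empty or partial document). A minimal illustration of the failing vs. passing shape — this is a hypothetical fragment, not the actual contents of `/etc/kubernetes/addons/ig-crd.yaml`, which are not shown in the log:

```yaml
# Fails client-side validation with "apiVersion not set, kind not set":
# the document has content but omits both required type fields.
metadata:
  name: example-resource

---
# Passes validation: apiVersion and kind identify the object type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.example.com
```

As the stderr notes, `--validate=false` would skip this client-side check, but the apply would then fail (or silently misbehave) server-side, since the API server also needs `apiVersion` and `kind` to route the object.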
	I1002 06:38:02.952380  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:03.090609  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:03.178940  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:03.227145  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:03.453458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:03.590296  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:03.683856  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:03.728071  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:03.952283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:04.091664  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:04.192092  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:04.226458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:04.451525  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:04.589908  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:04.683265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:04.730121  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:04.952803  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:05.091341  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:05.179246  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:05.227241  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:05.453166  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:05.590701  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:05.678855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:05.729441  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:05.955761  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:06.089976  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:06.179542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:06.229669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:06.451663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:06.590195  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:06.684205  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:06.784414  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:06.952931  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:07.090633  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:07.179271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:07.226645  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:07.452374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:07.590940  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:07.683125  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:07.726314  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:07.958423  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:08.089866  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:08.178562  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:08.226685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:08.452416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:08.589770  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:08.683752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:08.726663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:08.952521  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:09.090474  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:09.179170  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:09.227253  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:09.453357  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:09.593377  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:09.684130  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:09.728107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:09.951741  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:10.090984  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:10.181589  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:10.227685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:10.451548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:10.590276  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:10.684315  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:10.726459  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:10.951730  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:11.094349  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:11.181744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:11.226987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:11.452812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:11.589905  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:11.684532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:11.727310  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:11.952952  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:12.090716  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:12.178859  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:12.227650  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:12.452172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:12.590288  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:12.684016  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:12.727454  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:12.952912  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:13.089873  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:13.179357  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:13.226476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:13.452233  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:13.590829  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:13.683018  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:13.727319  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:13.952542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:14.091679  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:14.180387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:14.229029  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:14.453283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:14.593239  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:14.684343  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:14.727726  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:14.951591  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:15.090426  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:15.178861  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:15.227557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:15.452049  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:15.591161  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:15.683892  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:15.726700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:15.951767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:16.090224  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:16.179552  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:16.230312  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:16.452584  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:16.590173  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:16.682977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:16.728540  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:16.952802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:17.089859  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:17.178855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:17.227103  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:17.452592  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:17.589995  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:17.683737  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:17.727124  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:17.952069  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:18.090149  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:18.178860  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:18.227063  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:18.452179  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:18.516517  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:18.591793  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:18.683303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:18.726902  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:18.951881  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:19.090407  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:19.179390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:19.280453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:19.453053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:38:19.506255  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:19.506287  813918 retry.go:31] will retry after 24.46264979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:19.591253  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:19.683612  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:19.727161  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:19.951604  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:20.090820  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:20.179282  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:20.226653  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:20.451718  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:20.590946  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:20.683133  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:20.726532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:20.952036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:21.090532  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:21.179243  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:21.227567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:21.452954  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:21.590813  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:21.683988  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:21.726704  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:21.955708  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:22.090204  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:22.179312  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:22.226758  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:22.451702  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:22.590436  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:22.683396  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:22.726810  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:22.952518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:23.090640  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:23.178389  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:23.226432  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:23.452557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:23.589536  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:23.683265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:23.726387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:23.951660  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:24.089946  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:24.179032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:24.231204  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:24.452096  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:24.591481  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:24.684150  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:24.727560  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:24.951946  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:25.090564  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:25.180720  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:25.227767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:25.452182  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:25.590552  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:25.683982  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:25.727145  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:25.952505  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:26.096097  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:26.199167  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:26.227457  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:26.451429  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:26.589950  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:26.682877  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:26.728464  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:26.952825  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:27.090029  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:27.178693  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:27.227164  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:27.451877  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:27.590889  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:27.694494  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:27.726681  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:27.953022  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:28.090718  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:28.178712  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:28.226849  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:28.451699  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:28.590634  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:28.680358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:28.727806  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:28.952386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:29.090865  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:29.192262  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:29.296040  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:29.458956  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:29.592945  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:29.696528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:29.727745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:29.960224  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:30.108669  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:30.181176  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:30.229077  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:30.453626  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:30.590233  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:30.688386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:30.727482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:30.962237  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:31.091531  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:31.180490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:31.229509  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:31.452749  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:31.591491  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:31.683355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:31.726970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:31.952445  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:32.091436  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:32.190896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:32.228381  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:32.452736  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:32.590064  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:32.684030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:32.726390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:32.951770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:33.090909  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:33.178957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:33.228094  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:33.452528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:33.590375  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:33.684236  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:33.727041  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:33.952649  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:34.090690  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:34.178430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:34.227390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:34.452820  813918 kapi.go:107] duration metric: took 1m9.5042235s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 06:38:34.456518  813918 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-110926 cluster.
	I1002 06:38:34.459299  813918 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 06:38:34.462514  813918 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 06:38:34.590456  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:34.683783  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:34.726876  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:35.091815  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:35.192181  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:35.225996  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:35.590532  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:35.683177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:35.727077  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:36.090514  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:36.178631  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:36.226657  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:36.590586  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:36.684420  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:36.726745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:37.090769  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:37.193241  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:37.227067  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:37.591255  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:37.682734  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:37.727297  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:38.089746  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:38.178757  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:38.227287  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:38.591547  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:38.691271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:38.727108  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:39.106229  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:39.202273  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:39.228516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:39.589988  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:39.679442  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:39.726895  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:40.094511  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:40.179452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:40.237240  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:40.601942  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:40.693742  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:40.738619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:41.091045  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:41.191515  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:41.226632  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:41.591721  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:41.683452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:41.726863  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:42.091861  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:42.204238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:42.227557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:42.590297  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:42.683271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:42.727579  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.091018  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:43.179103  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:43.226868  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.591731  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:43.684032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:43.726500  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.969756  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:44.090261  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:44.179366  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:44.228188  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:44.592341  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:44.686940  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:44.727784  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:45.092283  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:45.178091  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.20829608s)
	W1002 06:38:45.178208  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:45.178250  813918 retry.go:31] will retry after 22.26617142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:45.179543  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:45.236432  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:45.590441  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:45.679320  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:45.727621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:46.090405  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:46.178426  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:46.226663  813918 kapi.go:107] duration metric: took 1m23.00356106s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 06:38:46.589619  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:46.683261  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:47.089734  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:47.179374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:47.592660  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:47.683768  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:48.090007  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:48.178644  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:48.591375  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:48.683509  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:49.089829  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:49.178961  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:49.591248  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:49.691276  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:50.089984  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:50.179171  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:50.590696  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:50.683346  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:51.089635  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:51.178745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:51.590723  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:51.683306  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:52.090482  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:52.190696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:52.590622  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:52.678787  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:53.090135  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:53.179421  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:53.590204  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:53.684303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:54.089742  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:54.178289  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:54.591054  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:54.692841  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:55.091556  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:55.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:55.590831  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:55.682962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:56.090533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:56.178890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:56.590836  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:56.683198  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:57.090570  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:57.179513  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:57.590364  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:57.683132  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:58.089540  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:58.179053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:58.590839  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:58.683962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:59.090850  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:59.190988  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:59.590732  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:59.685032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:00.114597  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:00.198802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:00.590774  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:00.683043  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:01.090771  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:01.178723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:01.590300  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:01.684480  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:02.091506  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:02.180050  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:02.591681  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:02.686987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:03.092104  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:03.180518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:03.590550  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:03.684084  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:04.091333  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:04.178516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:04.590364  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:04.685968  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:05.091208  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:05.179114  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:05.593116  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:05.693180  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:06.099807  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:06.192434  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:06.591063  813918 kapi.go:107] duration metric: took 1m45.004516868s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 06:39:06.691162  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:07.178929  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:07.445436  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:39:07.683258  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:08.179496  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:39:08.321958  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:39:08.322050  813918 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
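	(Annotation: the repeated `kubectl apply` failures above all report `error validating "/etc/kubernetes/addons/ig-crd.yaml": [apiVersion not set, kind not set]` — every YAML document in a manifest must declare both top-level fields before the API server will accept it. The sketch below is a hypothetical reproduction, not the actual addon file: a grep-based approximation of that check against a deliberately incomplete manifest.)

```shell
#!/bin/sh
# Hypothetical manifest missing the two required top-level fields,
# mimicking the defect kubectl reports for ig-crd.yaml above.
cat > /tmp/ig-crd-sample.yaml <<'EOF'
metadata:
  name: traces.gadget.kinvolk.io
spec: {}
EOF

# Rough local approximation of the validation kubectl performs:
# flag any document that does not set apiVersion and kind.
for key in apiVersion kind; do
  if ! grep -q "^${key}:" /tmp/ig-crd-sample.yaml; then
    echo "missing ${key}"
  fi
done
```

	(A manifest fixed by prepending, e.g., `apiVersion: apiextensions.k8s.io/v1` and `kind: CustomResourceDefinition` would pass this check; `--validate=false`, as the error message notes, merely suppresses the check rather than repairing the file.)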
	I1002 06:39:08.683452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:09.179353  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:09.686227  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:10.179510  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:10.683355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:11.179458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:11.679580  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:12.179918  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:12.684042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:13.178652  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:13.685874  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:14.179294  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:14.688744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:15.178402  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:15.684134  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:16.178182  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:16.682141  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:17.179203  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:17.684865  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:18.183409  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:18.683201  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:19.178867  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:19.679950  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:20.179378  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:20.683751  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:21.179070  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:21.679127  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:22.178339  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:22.682554  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:23.179809  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:23.684571  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:24.178890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:24.684796  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:25.178633  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:25.683087  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:26.178740  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:26.683803  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:27.178621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:27.679141  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:28.178920  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:28.684290  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:29.179325  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:29.680059  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:30.180120  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:30.683936  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:31.178444  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:31.683250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:32.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:32.684538  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:33.179130  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:33.684267  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:34.179364  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:34.684136  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:35.178488  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:35.683770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:36.179826  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:36.683998  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:37.179895  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:37.683890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:38.180914  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:38.683767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:39.179513  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:39.686625  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:40.179680  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:40.684314  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:41.178731  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:41.682866  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:42.180532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:42.685515  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:43.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:43.684036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:44.178761  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:44.678674  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:45.180677  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:45.683093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:46.178745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:46.682966  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:47.178714  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:47.687786  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:48.180034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:48.682439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:49.179416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:49.685544  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:50.179302  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:50.685100  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:51.179287  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:51.683778  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:52.179021  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:52.679097  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:53.178970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:53.684700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:54.179476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:54.684994  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:55.178796  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:55.679165  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:56.178666  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:56.684967  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:57.178854  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:57.678696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:58.179624  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:58.683296  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:59.180450  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:59.687218  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:00.195539  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:00.689354  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:01.178732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:01.685212  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:02.179265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:02.683955  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:03.178860  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:03.678460  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:04.178855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:04.686281  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:05.179400  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:05.679175  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:06.179017  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:06.683057  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:07.179262  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:07.684658  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:08.179829  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:08.683098  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:09.178903  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:09.686212  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:10.179744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:10.682952  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:11.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:11.685085  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:12.179154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:12.683453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:13.179437  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:13.683490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:14.179250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:14.684690  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:15.179775  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:15.684387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:16.178957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:16.678523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:17.179146  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:17.679174  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:18.179689  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:18.682903  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:19.178772  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:19.685172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:20.178915  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:20.684537  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:21.178688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:21.681514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:22.179537  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:22.683064  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:23.178976  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:23.682793  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:24.179279  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:24.685175  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:25.178553  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:25.683682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:26.179629  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:26.679433  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:27.178986  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:27.683516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:28.178938  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:28.684313  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:29.179037  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:29.682849  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:30.180161  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:30.683924  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:31.178283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:31.683997  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:32.179049  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:32.685786  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:33.179179  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:33.682830  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:34.179638  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:34.683135  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:35.178744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:35.684184  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:36.179717  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:36.679174  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:37.179123  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:37.683396  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:38.179078  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:38.682970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:39.179304  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:39.684431  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:40.179468  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:40.683907  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:41.178963  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:41.684491  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:42.180147  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:42.678812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:43.178520  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:43.679177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:44.178790  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:44.684374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:45.179855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:45.684397  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:46.179055  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:46.685615  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:47.178939  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:47.680235  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:48.178829  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:48.682679  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:49.179766  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:49.686979  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:50.178641  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:50.683095  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:51.178582  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:51.682578  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:52.179361  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:52.684019  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:53.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:53.683211  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:54.180830  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:54.685818  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:55.179776  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:55.683755  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:56.179597  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:56.683541  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:57.178536  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:57.679350  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:58.183218  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:58.683948  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:59.179617  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:59.681398  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:00.200089  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:00.683523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:01.180022  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:01.682762  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:02.179798  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:02.683809  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:03.179630  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:03.683920  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:04.178316  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:04.686534  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:05.179292  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:05.683293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:06.178370  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:06.682944  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:07.178545  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:07.685071  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:08.179215  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:08.684453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:09.178985  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:09.688380  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:10.179014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:10.682840  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:11.179693  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:11.683955  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:12.179386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:12.679132  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:13.178565  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:13.680539  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:14.179430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:14.684344  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:15.179591  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:15.679368  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:16.178436  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:16.683864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:17.180546  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:17.683586  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:18.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:18.679618  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:19.179120  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:19.684107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:20.178861  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:20.684034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:21.178317  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:21.684041  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:22.178322  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:22.683407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:23.179139  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:23.683117  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:24.178439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:24.685938  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:25.178476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:25.683871  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:26.178257  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:26.684421  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:27.178363  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:27.684075  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:28.178491  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:28.684622  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:29.179430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:29.679029  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:30.179857  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:30.684822  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:31.178471  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:31.682266  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:32.178454  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:32.683741  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:33.179093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:33.684238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:34.179255  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:34.685850  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:35.179285  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:35.684332  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:36.178487  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:36.679840  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:37.178710  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:37.684329  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:38.179191  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:38.685465  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:39.179295  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:39.684802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:40.179488  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:40.683626  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:41.179090  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:41.683827  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:42.211958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:42.683203  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:43.178389  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:43.683767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:44.179683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:44.684688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:45.179790  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:45.684540  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:46.179257  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:46.684514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:47.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:47.683477  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:48.178765  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:48.684151  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:49.179311  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:49.684698  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:50.179522  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:50.684199  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:51.178816  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:51.683369  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:52.178888  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:52.683785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:53.179801  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:53.684918  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:54.179419  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:54.686564  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:55.179115  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:55.679606  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:56.179733  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:56.684036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:57.178170  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:57.679142  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:58.178673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:58.679408  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:59.179192  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:59.685245  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:00.184879  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:00.679309  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:01.178793  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:01.683892  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:02.180107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:02.685443  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:03.178916  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:03.682980  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:04.178340  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:04.685958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:05.178346  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:05.678858  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:06.179520  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:06.685162  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:07.178663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:07.683927  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:08.178987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:08.683518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:09.179084  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:09.685719  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:10.178949  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:10.683567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:11.179144  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:11.678751  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:12.178975  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:12.685293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:13.178566  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:13.682732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:14.179093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:14.686648  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:15.178770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:15.682752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:16.179886  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:16.683072  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:17.178408  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:17.683343  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:18.179005  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:18.679908  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:19.178619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:19.685331  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:20.179236  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:20.683822  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:21.179233  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:21.684864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:22.179244  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:22.684351  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:23.180700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:23.683915  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:24.179907  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:24.683172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:25.178856  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:25.683739  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:26.179113  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:26.684228  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:27.178497  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:27.680321  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:28.178685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:28.684377  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:29.178668  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:29.683298  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:30.178673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:30.679836  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:31.179289  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:31.683809  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:32.179308  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:32.685527  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:33.179502  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:33.682722  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:34.179247  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:34.691933  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:35.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:35.684101  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:36.178537  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:36.679390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:37.178793  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:37.679292  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:38.178807  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:38.679635  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:39.179574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:39.685788  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:40.179536  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:40.679723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:41.178926  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:41.683342  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:42.205259  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:42.678979  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:43.178844  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:43.684358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:44.178792  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:44.680055  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:45.183250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:45.685665  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:46.179382  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:46.683567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:47.179323  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:47.683979  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:48.179642  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:48.678672  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:49.179393  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:49.688221  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:50.178875  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:50.683313  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:51.178669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:51.679683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:52.179098  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:52.681721  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:53.181436  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:53.683878  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:54.179394  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:54.682260  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:55.179274  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:55.679117  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:56.178213  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:56.684682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:57.179951  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:57.679759  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:58.179473  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:58.683157  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:59.178763  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:59.679298  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:00.179659  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:00.683416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:01.179914  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:01.684427  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:02.178932  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:02.684548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:03.179404  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:03.683536  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:04.179167  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:04.685131  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:05.178507  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:05.683442  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:06.178516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:06.679774  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:07.179201  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:07.679574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:08.179120  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:08.683089  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:09.178834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:09.684250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:10.178466  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:10.684419  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:11.179951  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:11.680107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:12.178342  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:12.683530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:13.179349  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:13.685184  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:14.178165  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:14.683342  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:15.179446  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:15.683832  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:16.179192  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:16.683553  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:17.179562  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:17.682187  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:18.179009  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:18.683979  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:19.179717  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:19.684080  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:20.179244  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:20.682553  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:21.178961  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:21.682187  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:22.179297  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:22.683633  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:23.176387  813918 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1002 06:43:23.176421  813918 kapi.go:107] duration metric: took 6m0.001003242s to wait for kubernetes.io/minikube-addons=registry ...
	W1002 06:43:23.176505  813918 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1002 06:43:23.179649  813918 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, default-storageclass, volcano, nvidia-device-plugin, storage-provisioner, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, gcp-auth, csi-hostpath-driver, ingress
	I1002 06:43:23.182525  813918 addons.go:514] duration metric: took 6m8.685068561s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner default-storageclass volcano nvidia-device-plugin storage-provisioner registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots gcp-auth csi-hostpath-driver ingress]
	I1002 06:43:23.182578  813918 start.go:246] waiting for cluster config update ...
	I1002 06:43:23.182605  813918 start.go:255] writing updated cluster config ...
	I1002 06:43:23.182910  813918 ssh_runner.go:195] Run: rm -f paused
	I1002 06:43:23.186967  813918 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:43:23.191359  813918 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s68lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.195909  813918 pod_ready.go:94] pod "coredns-66bc5c9577-s68lt" is "Ready"
	I1002 06:43:23.195939  813918 pod_ready.go:86] duration metric: took 4.553514ms for pod "coredns-66bc5c9577-s68lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.198221  813918 pod_ready.go:83] waiting for pod "etcd-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.202513  813918 pod_ready.go:94] pod "etcd-addons-110926" is "Ready"
	I1002 06:43:23.202537  813918 pod_ready.go:86] duration metric: took 4.291712ms for pod "etcd-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.204756  813918 pod_ready.go:83] waiting for pod "kube-apiserver-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.208864  813918 pod_ready.go:94] pod "kube-apiserver-addons-110926" is "Ready"
	I1002 06:43:23.208890  813918 pod_ready.go:86] duration metric: took 4.040561ms for pod "kube-apiserver-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.211197  813918 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.591502  813918 pod_ready.go:94] pod "kube-controller-manager-addons-110926" is "Ready"
	I1002 06:43:23.591528  813918 pod_ready.go:86] duration metric: took 380.304031ms for pod "kube-controller-manager-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.792134  813918 pod_ready.go:83] waiting for pod "kube-proxy-4zvzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.192193  813918 pod_ready.go:94] pod "kube-proxy-4zvzf" is "Ready"
	I1002 06:43:24.192225  813918 pod_ready.go:86] duration metric: took 400.063711ms for pod "kube-proxy-4zvzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.391575  813918 pod_ready.go:83] waiting for pod "kube-scheduler-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.791416  813918 pod_ready.go:94] pod "kube-scheduler-addons-110926" is "Ready"
	I1002 06:43:24.791440  813918 pod_ready.go:86] duration metric: took 399.838153ms for pod "kube-scheduler-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.791453  813918 pod_ready.go:40] duration metric: took 1.604452407s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:43:24.848923  813918 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 06:43:24.852286  813918 out.go:179] * Done! kubectl is now configured to use "addons-110926" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	f006bdfdbfa9c       1611cd07b61d5       15 minutes ago      Running             busybox                   0                   396f52d70bdd3       busybox                                    default
	8db7a8fd91b3a       bc6bf68f85c70       16 minutes ago      Running             registry                  0                   2574946f7674b       registry-66898fdd98-926mp                  kube-system
	5f7d9891cc455       5ed383cb88c34       31 minutes ago      Running             controller                0                   6c717320771c8       ingress-nginx-controller-9cc49f96f-srz99   ingress-nginx
	5f68f17265ee3       deda3ad36c19b       31 minutes ago      Running             gadget                    0                   4c1a07ae3ab5b       gadget-5sxf6                               gadget
	739d12f7cb55c       c67c707f59d87       32 minutes ago      Exited              patch                     0                   bb748c608a5b6       ingress-nginx-admission-patch-bq878        ingress-nginx
	591026b1dba39       c67c707f59d87       32 minutes ago      Exited              create                    0                   cb5a57455ef86       ingress-nginx-admission-create-lw8gl       ingress-nginx
	5ad865f2d99af       7b85e0fbfd435       32 minutes ago      Running             registry-proxy            0                   f2c6d58f83a8d       registry-proxy-bqxnl                       kube-system
	4829c9264d5b3       ba04bb24b9575       32 minutes ago      Running             storage-provisioner       0                   cd62db6aa4ca0       storage-provisioner                        kube-system
	d607380a0ea95       138784d87c9c5       32 minutes ago      Running             coredns                   0                   97bcb21e01196       coredns-66bc5c9577-s68lt                   kube-system
	001c4797204fc       b1a8c6f707935       33 minutes ago      Running             kindnet-cni               0                   a8dbd581dae29       kindnet-zb4h8                              kube-system
	205ba78bdcdf4       05baa95f5142d       33 minutes ago      Running             kube-proxy                0                   1c95f15f187e7       kube-proxy-4zvzf                           kube-system
	7d5d1641aee07       43911e833d64d       33 minutes ago      Running             kube-apiserver            0                   111e5d5f57119       kube-apiserver-addons-110926               kube-system
	b56ea6dbe0e21       b5f57ec6b9867       33 minutes ago      Running             kube-scheduler            0                   740338c713381       kube-scheduler-addons-110926               kube-system
	dd74ed9d21ed1       7eb2c6ff0c5a7       33 minutes ago      Running             kube-controller-manager   0                   408527a4c051e       kube-controller-manager-addons-110926      kube-system
	8be3089b4391b       a1894772a478e       33 minutes ago      Running             etcd                      0                   f832da367e6b5       etcd-addons-110926                         kube-system
	
	
	==> containerd <==
	Oct 02 07:04:54 addons-110926 containerd[753]: time="2025-10-02T07:04:54.245716693Z" level=info msg="PullImage \"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\""
	Oct 02 07:04:54 addons-110926 containerd[753]: time="2025-10-02T07:04:54.248095939Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:04:54 addons-110926 containerd[753]: time="2025-10-02T07:04:54.402286413Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:04:54 addons-110926 containerd[753]: time="2025-10-02T07:04:54.776584419Z" level=error msg="PullImage \"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\" failed" error="failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:04:54 addons-110926 containerd[753]: time="2025-10-02T07:04:54.776638341Z" level=info msg="stop pulling image docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: active requests=0, bytes read=11732"
	Oct 02 07:05:38 addons-110926 containerd[753]: time="2025-10-02T07:05:38.246794228Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 02 07:05:38 addons-110926 containerd[753]: time="2025-10-02T07:05:38.249257369Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:05:38 addons-110926 containerd[753]: time="2025-10-02T07:05:38.396815916Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:05:38 addons-110926 containerd[753]: time="2025-10-02T07:05:38.668443547Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:05:38 addons-110926 containerd[753]: time="2025-10-02T07:05:38.668563675Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Oct 02 07:07:28 addons-110926 containerd[753]: time="2025-10-02T07:07:28.244374240Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 02 07:07:28 addons-110926 containerd[753]: time="2025-10-02T07:07:28.246749238Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:07:28 addons-110926 containerd[753]: time="2025-10-02T07:07:28.386894885Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:07:28 addons-110926 containerd[753]: time="2025-10-02T07:07:28.778372391Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:07:28 addons-110926 containerd[753]: time="2025-10-02T07:07:28.778476537Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21215"
	Oct 02 07:08:23 addons-110926 containerd[753]: time="2025-10-02T07:08:23.244847518Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 02 07:08:23 addons-110926 containerd[753]: time="2025-10-02T07:08:23.247285264Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:08:23 addons-110926 containerd[753]: time="2025-10-02T07:08:23.362997427Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:08:23 addons-110926 containerd[753]: time="2025-10-02T07:08:23.786134841Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:08:23 addons-110926 containerd[753]: time="2025-10-02T07:08:23.786385668Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=21300"
	Oct 02 07:10:05 addons-110926 containerd[753]: time="2025-10-02T07:10:05.245819492Z" level=info msg="PullImage \"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\""
	Oct 02 07:10:05 addons-110926 containerd[753]: time="2025-10-02T07:10:05.248121063Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:10:05 addons-110926 containerd[753]: time="2025-10-02T07:10:05.387918379Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:10:05 addons-110926 containerd[753]: time="2025-10-02T07:10:05.684046485Z" level=error msg="PullImage \"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\" failed" error="failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:10:05 addons-110926 containerd[753]: time="2025-10-02T07:10:05.684112338Z" level=info msg="stop pulling image docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: active requests=0, bytes read=11047"
	
	
	==> coredns [d607380a0ea95122f5da6e25cf2168aa3ea1ff11f2efdf89f4a8c2d0e5150d23] <==
	[INFO] 10.244.0.10:50787 - 30802 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001101513s
	[INFO] 10.244.0.10:50787 - 65077 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000141051s
	[INFO] 10.244.0.10:50787 - 27995 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000170818s
	[INFO] 10.244.0.10:57105 - 40777 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000193627s
	[INFO] 10.244.0.10:57105 - 44741 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000111316s
	[INFO] 10.244.0.10:57105 - 25201 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000093429s
	[INFO] 10.244.0.10:57105 - 38571 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084034s
	[INFO] 10.244.0.10:57105 - 24208 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076166s
	[INFO] 10.244.0.10:57105 - 56789 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000139811s
	[INFO] 10.244.0.10:57105 - 46307 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001361429s
	[INFO] 10.244.0.10:57105 - 10819 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.000882336s
	[INFO] 10.244.0.10:57105 - 62476 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000092289s
	[INFO] 10.244.0.10:57105 - 29096 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.0000767s
	[INFO] 10.244.0.10:43890 - 1641 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123016s
	[INFO] 10.244.0.10:43890 - 1411 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136259s
	[INFO] 10.244.0.10:42249 - 55738 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014663s
	[INFO] 10.244.0.10:42249 - 56025 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119479s
	[INFO] 10.244.0.10:58600 - 45308 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118355s
	[INFO] 10.244.0.10:58600 - 45497 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012044s
	[INFO] 10.244.0.10:58816 - 38609 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0013196s
	[INFO] 10.244.0.10:58816 - 38806 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001169622s
	[INFO] 10.244.0.10:53569 - 36791 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135397s
	[INFO] 10.244.0.10:53569 - 36387 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116156s
	[INFO] 10.244.0.26:45800 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000375054s
	[INFO] 10.244.0.26:44881 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100551s
	
	
	==> describe nodes <==
	Name:               addons-110926
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-110926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-110926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_37_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-110926
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:37:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-110926
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:10:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:07:46 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:07:46 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:07:46 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:07:46 +0000   Thu, 02 Oct 2025 06:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-110926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 852f460d42254382a140bbeecb584248
	  System UUID:                c6ea63c0-97bd-4894-b738-fecc8ba127ac
	  Boot ID:                    7d897d56-c217-4cfc-926c-91f9be002777
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  gadget                      gadget-5sxf6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-srz99    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         33m
	  kube-system                 coredns-66bc5c9577-s68lt                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     33m
	  kube-system                 etcd-addons-110926                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33m
	  kube-system                 kindnet-zb4h8                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33m
	  kube-system                 kube-apiserver-addons-110926                250m (12%)    0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 kube-controller-manager-addons-110926       200m (10%)    0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 kube-proxy-4zvzf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 kube-scheduler-addons-110926                100m (5%)     0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 registry-66898fdd98-926mp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	  kube-system                 registry-proxy-bqxnl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         32m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         33m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 33m                kube-proxy       
	  Normal   Starting                 33m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  33m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    33m (x8 over 33m)  kubelet          Node addons-110926 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33m (x7 over 33m)  kubelet          Node addons-110926 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  33m (x8 over 33m)  kubelet          Node addons-110926 status is now: NodeHasSufficientMemory
	  Normal   Starting                 33m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  33m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  33m                kubelet          Node addons-110926 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33m                kubelet          Node addons-110926 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33m                kubelet          Node addons-110926 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           33m                node-controller  Node addons-110926 event: Registered Node addons-110926 in Controller
	  Normal   NodeReady                32m                kubelet          Node addons-110926 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 05:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 2 06:35] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8be3089b4391b68797b9ff88ff2b0c3043e3281ca30bcb48a82169b26fb4081d] <==
	{"level":"warn","ts":"2025-10-02T06:37:44.548456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.563248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.624874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.689649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.719544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.736879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.755892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.770836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.790478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:39:30.364216Z","caller":"traceutil/trace.go:172","msg":"trace[134372730] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"123.100543ms","start":"2025-10-02T06:39:30.241102Z","end":"2025-10-02T06:39:30.364202Z","steps":["trace[134372730] 'process raft request'  (duration: 122.981302ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T06:47:04.880918Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1925}
	{"level":"info","ts":"2025-10-02T06:47:04.920117Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1925,"took":"38.610713ms","hash":2612370864,"current-db-size-bytes":8695808,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5120000,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2025-10-02T06:47:04.920180Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2612370864,"revision":1925,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T06:52:04.887964Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2405}
	{"level":"info","ts":"2025-10-02T06:52:04.907361Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2405,"took":"18.449885ms","hash":1927945438,"current-db-size-bytes":8695808,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":3727360,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-10-02T06:52:04.907428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1927945438,"revision":2405,"compact-revision":1925}
	{"level":"info","ts":"2025-10-02T06:57:04.895109Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2864}
	{"level":"info","ts":"2025-10-02T06:57:04.926419Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2864,"took":"30.706949ms","hash":805286141,"current-db-size-bytes":9138176,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":5545984,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-02T06:57:04.926487Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":805286141,"revision":2864,"compact-revision":2405}
	{"level":"info","ts":"2025-10-02T07:02:04.901785Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":3652}
	{"level":"info","ts":"2025-10-02T07:02:04.924487Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":3652,"took":"21.535369ms","hash":3971856220,"current-db-size-bytes":9138176,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4923392,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2025-10-02T07:02:04.924535Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3971856220,"revision":3652,"compact-revision":2864}
	{"level":"info","ts":"2025-10-02T07:07:04.908071Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":4304}
	{"level":"info","ts":"2025-10-02T07:07:04.927732Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":4304,"took":"18.937901ms","hash":2577924456,"current-db-size-bytes":9138176,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":3325952,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2025-10-02T07:07:04.927787Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2577924456,"revision":4304,"compact-revision":3652}
	
	
	==> kernel <==
	 07:10:38 up  6:53,  0 user,  load average: 0.08, 0.28, 0.80
	Linux addons-110926 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [001c4797204fc8489af667e5dc44dc2de85bde6fbbb94189af8eaa6e51b826b8] <==
	I1002 07:08:36.725840       1 main.go:301] handling current node
	I1002 07:08:46.724991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:08:46.725026       1 main.go:301] handling current node
	I1002 07:08:56.724925       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:08:56.724960       1 main.go:301] handling current node
	I1002 07:09:06.723442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:09:06.723482       1 main.go:301] handling current node
	I1002 07:09:16.724937       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:09:16.725200       1 main.go:301] handling current node
	I1002 07:09:26.728874       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:09:26.728912       1 main.go:301] handling current node
	I1002 07:09:36.731804       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:09:36.731840       1 main.go:301] handling current node
	I1002 07:09:46.724962       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:09:46.724998       1 main.go:301] handling current node
	I1002 07:09:56.725995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:09:56.726033       1 main.go:301] handling current node
	I1002 07:10:06.723423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:10:06.723457       1 main.go:301] handling current node
	I1002 07:10:16.722399       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:10:16.722459       1 main.go:301] handling current node
	I1002 07:10:26.726363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:10:26.726403       1 main.go:301] handling current node
	I1002 07:10:36.723140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:10:36.723240       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7d5d1641aee0712674398096e96919d3b125a32fedea7425f03406a609a25f01] <==
	W1002 06:55:16.150162       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1002 06:55:16.557861       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1002 06:55:34.021110       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57690: use of closed network connection
	E1002 06:55:34.271460       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57730: use of closed network connection
	E1002 06:55:34.454019       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57748: use of closed network connection
	I1002 06:57:07.469062       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:01:59.058631       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.126.237"}
	I1002 07:02:18.905132       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:02:18.905185       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 07:02:18.940540       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:02:18.940662       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 07:02:18.974134       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:02:18.974197       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 07:02:18.984189       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:02:18.984235       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1002 07:02:19.011187       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1002 07:02:19.011992       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1002 07:02:19.984325       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1002 07:02:20.012320       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1002 07:02:20.039227       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	E1002 07:02:20.253085       1 watch.go:272] "Unhandled Error" err="client disconnected" logger="UnhandledError"
	I1002 07:02:36.318362       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1002 07:02:36.698362       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.131.93"}
	I1002 07:03:33.094807       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1002 07:07:07.469449       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [dd74ed9d21ed14fc6778ffc7add04a70910ec955742f31d4442b2c07c8ea86db] <==
	E1002 07:09:59.512314       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:10:04.708546       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:04.710287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:08.994985       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:08.996156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:10.258356       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:10.259456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:10.450078       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:10.451173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:11.869383       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:11.870791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:14.512978       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:10:14.568351       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:14.569478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:14.962343       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:14.963683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:16.889695       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:16.890919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:29.513402       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:10:32.525293       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:32.526674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:37.490514       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:37.491613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:10:37.688657       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:10:37.689976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [205ba78bdcdf484d8af0d0330d3a99ba39bdc20efa19428202c6c4cd7dfd9d33] <==
	I1002 06:37:16.426570       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:37:16.498503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:37:16.599091       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:37:16.599151       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:37:16.599225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:37:16.664219       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:37:16.664277       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:37:16.670034       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:37:16.670375       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:37:16.670399       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:37:16.671951       1 config.go:200] "Starting service config controller"
	I1002 06:37:16.671975       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:37:16.671996       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:37:16.672007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:37:16.672023       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:37:16.672032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:37:16.676259       1 config.go:309] "Starting node config controller"
	I1002 06:37:16.676302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:37:16.676311       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:37:16.772116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:37:16.772157       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:37:16.772192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b56ea6dbe0e218561ee35e4169c6c63e3160ecf828f68ed8b40ef0285f668b5e] <==
	I1002 06:37:08.301736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 06:37:08.302088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:37:08.302287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1002 06:37:08.298598       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 06:37:08.303874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:37:08.304074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:37:08.304269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:37:08.304471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:37:08.308085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 06:37:08.317169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:37:08.317571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:37:08.317827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:37:08.317882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:37:08.317917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:37:08.317998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:37:08.318060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:37:08.325459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:37:08.325531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:37:08.325571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:37:08.325620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:37:08.325676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1002 06:37:09.602936       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 07:06:05.923232       1 framework.go:1298] "Plugin failed" err="binding volumes: context deadline exceeded" plugin="VolumeBinding" pod="default/test-local-path" node="addons-110926"
	E1002 07:06:05.924717       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running PreBind plugin \"VolumeBinding\": binding volumes: context deadline exceeded" logger="UnhandledError" pod="default/test-local-path"
	E1002 07:06:07.003030       1 schedule_one.go:191] "Status after running PostFilter plugins for pod" logger="UnhandledError" pod="default/test-local-path" status="not found"
	
	
	==> kubelet <==
	Oct 02 07:09:26 addons-110926 kubelet[1456]: E1002 07:09:26.244564    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e5b2eab0-6492-4ef7-830a-22a929549537"
	Oct 02 07:09:30 addons-110926 kubelet[1456]: E1002 07:09:30.245145    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:09:33 addons-110926 kubelet[1456]: I1002 07:09:33.243725    1456 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-926mp" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:09:37 addons-110926 kubelet[1456]: I1002 07:09:37.244426    1456 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-bqxnl" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:09:37 addons-110926 kubelet[1456]: E1002 07:09:37.245423    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:09:40 addons-110926 kubelet[1456]: I1002 07:09:40.245021    1456 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:09:41 addons-110926 kubelet[1456]: E1002 07:09:41.245127    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e5b2eab0-6492-4ef7-830a-22a929549537"
	Oct 02 07:09:42 addons-110926 kubelet[1456]: E1002 07:09:42.244491    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:09:50 addons-110926 kubelet[1456]: E1002 07:09:50.245669    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:09:54 addons-110926 kubelet[1456]: E1002 07:09:54.244688    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:09:56 addons-110926 kubelet[1456]: E1002 07:09:56.246224    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e5b2eab0-6492-4ef7-830a-22a929549537"
	Oct 02 07:09:59 addons-110926 kubelet[1456]: W1002 07:09:59.470677    1456 logging.go:55] [core] [Channel #78 SubChannel #79]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Oct 02 07:10:05 addons-110926 kubelet[1456]: E1002 07:10:05.684589    1456 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89"
	Oct 02 07:10:05 addons-110926 kubelet[1456]: E1002 07:10:05.684660    1456 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89"
	Oct 02 07:10:05 addons-110926 kubelet[1456]: E1002 07:10:05.684844    1456 kuberuntime_manager.go:1449] "Unhandled Error" err="container minikube-ingress-dns start failed in pod kube-ingress-dns-minikube_kube-system(ef8b2745-553d-44a6-984e-b4ab801f79f7): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:10:05 addons-110926 kubelet[1456]: E1002 07:10:05.684908    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:10:08 addons-110926 kubelet[1456]: E1002 07:10:08.244490    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:10:11 addons-110926 kubelet[1456]: E1002 07:10:11.244624    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e5b2eab0-6492-4ef7-830a-22a929549537"
	Oct 02 07:10:21 addons-110926 kubelet[1456]: E1002 07:10:21.244216    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:10:21 addons-110926 kubelet[1456]: E1002 07:10:21.245169    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:10:22 addons-110926 kubelet[1456]: E1002 07:10:22.244161    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e5b2eab0-6492-4ef7-830a-22a929549537"
	Oct 02 07:10:29 addons-110926 kubelet[1456]: I1002 07:10:29.244140    1456 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-s68lt" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:10:35 addons-110926 kubelet[1456]: E1002 07:10:35.244169    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:10:35 addons-110926 kubelet[1456]: E1002 07:10:35.245219    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="e5b2eab0-6492-4ef7-830a-22a929549537"
	Oct 02 07:10:36 addons-110926 kubelet[1456]: E1002 07:10:36.245398    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	
	
	==> storage-provisioner [4829c9264d5b3ae1fc764ede230e33d7252374c2ec8cd6385777a58debef5783] <==
	W1002 07:10:13.551950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:15.555230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:15.562409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:17.565308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:17.569911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:19.574592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:19.582815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:21.585869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:21.592679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:23.595707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:23.600138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:25.603712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:25.610380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:27.613750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:27.618098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:29.621647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:29.626742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:31.631390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:31.635913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:33.639307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:33.644088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:35.646704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:35.651550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:37.655699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:10:37.662214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-110926 -n addons-110926
helpers_test.go:269: (dbg) Run:  kubectl --context addons-110926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-110926 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-110926 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube: exit status 1 (121.732151ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-110926/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:02:36 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hqjrl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hqjrl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-110926
	  Warning  Failed     7m48s                  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m1s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m1s (x4 over 8m2s)    kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m1s (x5 over 8m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     2m57s (x20 over 8m1s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m45s (x21 over 8m1s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-110926/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:56:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mn5jj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mn5jj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason              Age                   From                     Message
	  ----     ------              ----                  ----                     -------
	  Normal   Scheduled           14m                   default-scheduler        Successfully assigned default/task-pv-pod to addons-110926
	  Normal   Pulling             11m (x5 over 14m)     kubelet                  Pulling image "docker.io/nginx"
	  Warning  Failed              11m (x5 over 14m)     kubelet                  Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed              11m (x5 over 14m)     kubelet                  Error: ErrImagePull
	  Normal   BackOff             4m19s (x41 over 14m)  kubelet                  Back-off pulling image "docker.io/nginx"
	  Warning  Failed              4m19s (x41 over 14m)  kubelet                  Error: ImagePullBackOff
	  Warning  FailedAttachVolume  2m (x3 over 6m1s)     attachdetach-controller  AttachVolume.Attach failed for volume "pvc-bb2303e2-47d1-47e9-8c8a-acdb97dfb33e" : timed out waiting for external-attacher of hostpath.csi.k8s.io CSI driver to attach volume e36f4711-9f5c-11f0-8a1b-0a25e5203736
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-km9d9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-km9d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  4m34s  default-scheduler  running PreBind plugin "VolumeBinding": binding volumes: context deadline exceeded
	  Warning  FailedScheduling  4m32s  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. not found

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lw8gl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bq878" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-110926 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 addons disable ingress-dns --alsologtostderr -v=1: (1.044438821s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 addons disable ingress --alsologtostderr -v=1: (7.768451035s)
--- FAIL: TestAddons/parallel/Ingress (492.65s)

TestAddons/parallel/CSI (391.27s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1002 06:55:54.863711  813155 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1002 06:55:54.867696  813155 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1002 06:55:54.867722  813155 kapi.go:107] duration metric: took 4.023298ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.033357ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-110926 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-110926 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d44826ef-2b9b-4f5d-900f-49f95628e1f7] Pending
helpers_test.go:352: "task-pv-pod" [d44826ef-2b9b-4f5d-900f-49f95628e1f7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-110926 -n addons-110926
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-10-02 07:02:15.672189407 +0000 UTC m=+1566.559536736
addons_test.go:567: (dbg) Run:  kubectl --context addons-110926 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-110926 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-110926/192.168.49.2
Start Time:       Thu, 02 Oct 2025 06:56:15 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.28
IPs:
IP:  10.244.0.28
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mn5jj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-mn5jj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/task-pv-pod to addons-110926
Normal   Pulling    3m3s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m3s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m3s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Normal   BackOff    55s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     55s (x21 over 5m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:567: (dbg) Run:  kubectl --context addons-110926 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-110926 logs task-pv-pod -n default: exit status 1 (102.718772ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:567: kubectl --context addons-110926 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-110926
helpers_test.go:243: (dbg) docker inspect addons-110926:

-- stdout --
	[
	    {
	        "Id": "e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d",
	        "Created": "2025-10-02T06:36:47.077600034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 814321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:36:47.138474038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/hostname",
	        "HostsPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/hosts",
	        "LogPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d-json.log",
	        "Name": "/addons-110926",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-110926:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-110926",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d",
	                "LowerDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d-init/diff:/var/lib/docker/overlay2/f1b2a52495d4d5d1e70fc487fac677b5080c5f1320773666a738aa42def3e2df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-110926",
	                "Source": "/var/lib/docker/volumes/addons-110926/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-110926",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-110926",
	                "name.minikube.sigs.k8s.io": "addons-110926",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e03dfd9e44981225a70f6640c6b12a48805938cfdd54b566df7bddffa824b2d",
	            "SandboxKey": "/var/run/docker/netns/6e03dfd9e449",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-110926": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:3c:a1:2d:84:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c2d471fc3c60a7f5a83ca737cf0a22c0c0076227d91a7e348867826280521af7",
	                    "EndpointID": "885b90e051ad80837eb5c6d3c161821bbf8a3c111f24b170e0bc233d0690c448",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-110926",
	                        "e88a06110ea1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-110926 -n addons-110926
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 logs -n 25: (1.298266792s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                      ARGS                                                                                                                                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-492765 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                          │ download-only-492765   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-492765                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-492765   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ -o=json --download-only -p download-only-547243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                          │ download-only-547243   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-547243                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-547243   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-492765                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-492765   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-547243                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-547243   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ --download-only -p download-docker-533728 --alsologtostderr --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-533728 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ -p download-docker-533728                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-docker-533728 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ --download-only -p binary-mirror-704812 --alsologtostderr --binary-mirror http://127.0.0.1:37961 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-704812   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ -p binary-mirror-704812                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ binary-mirror-704812   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ addons  │ enable dashboard -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ start   │ -p addons-110926 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:43 UTC │
	│ addons  │ addons-110926 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ ip      │ addons-110926 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:56 UTC │ 02 Oct 25 06:56 UTC │
	│ addons  │ addons-110926 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ addons-110926 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons  │ enable headlamp -p addons-110926 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:36:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:36:21.580334  813918 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:36:21.580482  813918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:21.580492  813918 out.go:374] Setting ErrFile to fd 2...
	I1002 06:36:21.580497  813918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:21.580834  813918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 06:36:21.581311  813918 out.go:368] Setting JSON to false
	I1002 06:36:21.582265  813918 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22731,"bootTime":1759364251,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 06:36:21.582336  813918 start.go:140] virtualization:  
	I1002 06:36:21.585831  813918 out.go:179] * [addons-110926] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:36:21.589067  813918 notify.go:220] Checking for updates...
	I1002 06:36:21.589658  813918 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:36:21.592579  813918 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:36:21.595634  813918 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:36:21.598400  813918 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 06:36:21.601243  813918 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 06:36:21.604214  813918 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:36:21.607495  813918 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:36:21.629855  813918 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:36:21.629989  813918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:21.693096  813918 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:36:21.683464105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:21.693212  813918 docker.go:318] overlay module found
	I1002 06:36:21.698158  813918 out.go:179] * Using the docker driver based on user configuration
	I1002 06:36:21.700959  813918 start.go:304] selected driver: docker
	I1002 06:36:21.700986  813918 start.go:924] validating driver "docker" against <nil>
	I1002 06:36:21.701000  813918 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:36:21.701711  813918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:21.758634  813918 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:36:21.749346343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:21.758811  813918 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:36:21.759085  813918 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:36:21.762043  813918 out.go:179] * Using Docker driver with root privileges
	I1002 06:36:21.764916  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:36:21.764987  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:21.765005  813918 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:36:21.765078  813918 start.go:348] cluster config:
	{Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:21.768148  813918 out.go:179] * Starting "addons-110926" primary control-plane node in "addons-110926" cluster
	I1002 06:36:21.771007  813918 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 06:36:21.773962  813918 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:36:21.776817  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:21.776869  813918 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 06:36:21.776883  813918 cache.go:58] Caching tarball of preloaded images
	I1002 06:36:21.776920  813918 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:36:21.776978  813918 preload.go:233] Found /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 06:36:21.776988  813918 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 06:36:21.777328  813918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json ...
	I1002 06:36:21.777357  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json: {Name:mk2f8f9458f5bc5a3d522cc7bc03c497073f8f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:21.792651  813918 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:36:21.792805  813918 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:36:21.792830  813918 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 06:36:21.792839  813918 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 06:36:21.792848  813918 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 06:36:21.792856  813918 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 06:36:39.840628  813918 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 06:36:39.840677  813918 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:36:39.840706  813918 start.go:360] acquireMachinesLock for addons-110926: {Name:mk5b3ba2eb8943c76c6ef867a9f0efe000290e8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:36:39.840853  813918 start.go:364] duration metric: took 124.262µs to acquireMachinesLock for "addons-110926"
	I1002 06:36:39.840884  813918 start.go:93] Provisioning new machine with config: &{Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 06:36:39.840959  813918 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:36:39.844345  813918 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 06:36:39.844567  813918 start.go:159] libmachine.API.Create for "addons-110926" (driver="docker")
	I1002 06:36:39.844615  813918 client.go:168] LocalClient.Create starting
	I1002 06:36:39.844744  813918 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem
	I1002 06:36:40.158293  813918 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem
	I1002 06:36:40.423695  813918 cli_runner.go:164] Run: docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:36:40.439045  813918 cli_runner.go:211] docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:36:40.439144  813918 network_create.go:284] running [docker network inspect addons-110926] to gather additional debugging logs...
	I1002 06:36:40.439166  813918 cli_runner.go:164] Run: docker network inspect addons-110926
	W1002 06:36:40.454853  813918 cli_runner.go:211] docker network inspect addons-110926 returned with exit code 1
	I1002 06:36:40.454885  813918 network_create.go:287] error running [docker network inspect addons-110926]: docker network inspect addons-110926: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-110926 not found
	I1002 06:36:40.454900  813918 network_create.go:289] output of [docker network inspect addons-110926]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-110926 not found
	
	** /stderr **
	I1002 06:36:40.454994  813918 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:36:40.471187  813918 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3c190}
	I1002 06:36:40.471239  813918 network_create.go:124] attempt to create docker network addons-110926 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:36:40.471291  813918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-110926 addons-110926
	I1002 06:36:40.528426  813918 network_create.go:108] docker network addons-110926 192.168.49.0/24 created
	I1002 06:36:40.528461  813918 kic.go:121] calculated static IP "192.168.49.2" for the "addons-110926" container
	I1002 06:36:40.528550  813918 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:36:40.544507  813918 cli_runner.go:164] Run: docker volume create addons-110926 --label name.minikube.sigs.k8s.io=addons-110926 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:36:40.560870  813918 oci.go:103] Successfully created a docker volume addons-110926
	I1002 06:36:40.560961  813918 cli_runner.go:164] Run: docker run --rm --name addons-110926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --entrypoint /usr/bin/test -v addons-110926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:36:42.684275  813918 cli_runner.go:217] Completed: docker run --rm --name addons-110926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --entrypoint /usr/bin/test -v addons-110926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.123276184s)
	I1002 06:36:42.684309  813918 oci.go:107] Successfully prepared a docker volume addons-110926
	I1002 06:36:42.684338  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:42.684360  813918 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:36:42.684441  813918 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-110926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:36:47.011851  813918 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-110926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.327364513s)
	I1002 06:36:47.011897  813918 kic.go:203] duration metric: took 4.327533581s to extract preloaded images to volume ...
	W1002 06:36:47.012040  813918 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 06:36:47.012157  813918 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:36:47.062619  813918 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-110926 --name addons-110926 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-110926 --network addons-110926 --ip 192.168.49.2 --volume addons-110926:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:36:47.379291  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Running}}
	I1002 06:36:47.400798  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.426150  813918 cli_runner.go:164] Run: docker exec addons-110926 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:36:47.477926  813918 oci.go:144] the created container "addons-110926" has a running status.
	I1002 06:36:47.477953  813918 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa...
	I1002 06:36:47.781138  813918 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:36:47.806163  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.827180  813918 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:36:47.827199  813918 kic_runner.go:114] Args: [docker exec --privileged addons-110926 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:36:47.891791  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.911592  813918 machine.go:93] provisionDockerMachine start ...
	I1002 06:36:47.911695  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:47.930991  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:47.931327  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:47.931345  813918 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:36:47.931960  813918 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57194->127.0.0.1:33863: read: connection reset by peer
	I1002 06:36:51.072477  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110926
	
	I1002 06:36:51.072569  813918 ubuntu.go:182] provisioning hostname "addons-110926"
	I1002 06:36:51.072685  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.090401  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:51.090720  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:51.090740  813918 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-110926 && echo "addons-110926" | sudo tee /etc/hostname
	I1002 06:36:51.236050  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110926
	
	I1002 06:36:51.236138  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.258063  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:51.258373  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:51.258395  813918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-110926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-110926/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-110926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:36:51.388860  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 06:36:51.388887  813918 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-811293/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-811293/.minikube}
	I1002 06:36:51.388910  813918 ubuntu.go:190] setting up certificates
	I1002 06:36:51.388920  813918 provision.go:84] configureAuth start
	I1002 06:36:51.388983  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:51.405357  813918 provision.go:143] copyHostCerts
	I1002 06:36:51.405461  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem (1123 bytes)
	I1002 06:36:51.405586  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem (1679 bytes)
	I1002 06:36:51.405650  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem (1078 bytes)
	I1002 06:36:51.405711  813918 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem org=jenkins.addons-110926 san=[127.0.0.1 192.168.49.2 addons-110926 localhost minikube]
	I1002 06:36:51.612527  813918 provision.go:177] copyRemoteCerts
	I1002 06:36:51.612597  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:36:51.612649  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.629460  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:51.725298  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:36:51.743050  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:36:51.760643  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:36:51.777747  813918 provision.go:87] duration metric: took 388.803174ms to configureAuth
	I1002 06:36:51.777772  813918 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:36:51.777954  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:36:51.777961  813918 machine.go:96] duration metric: took 3.866353513s to provisionDockerMachine
	I1002 06:36:51.777968  813918 client.go:171] duration metric: took 11.933342699s to LocalClient.Create
	I1002 06:36:51.777991  813918 start.go:167] duration metric: took 11.933425856s to libmachine.API.Create "addons-110926"
	I1002 06:36:51.778000  813918 start.go:293] postStartSetup for "addons-110926" (driver="docker")
	I1002 06:36:51.778009  813918 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:36:51.778057  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:36:51.778100  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.794568  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:51.888438  813918 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:36:51.891559  813918 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:36:51.891587  813918 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:36:51.891598  813918 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/addons for local assets ...
	I1002 06:36:51.891662  813918 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/files for local assets ...
	I1002 06:36:51.891684  813918 start.go:296] duration metric: took 113.678581ms for postStartSetup
	I1002 06:36:51.891998  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:51.908094  813918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json ...
	I1002 06:36:51.908374  813918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:36:51.908417  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.924432  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.017816  813918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:36:52.022845  813918 start.go:128] duration metric: took 12.181870526s to createHost
	I1002 06:36:52.022873  813918 start.go:83] releasing machines lock for "addons-110926", held for 12.182006857s
	I1002 06:36:52.022950  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:52.040319  813918 ssh_runner.go:195] Run: cat /version.json
	I1002 06:36:52.040381  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:52.040643  813918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:36:52.040709  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:52.064673  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.078579  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.168362  813918 ssh_runner.go:195] Run: systemctl --version
	I1002 06:36:52.263150  813918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:36:52.267928  813918 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:36:52.267998  813918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:36:52.294529  813918 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 06:36:52.294574  813918 start.go:495] detecting cgroup driver to use...
	I1002 06:36:52.294607  813918 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 06:36:52.294670  813918 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 06:36:52.309592  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 06:36:52.322252  813918 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:36:52.322343  813918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:36:52.339306  813918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:36:52.357601  813918 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:36:52.498437  813918 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:36:52.636139  813918 docker.go:234] disabling docker service ...
	I1002 06:36:52.636222  813918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:36:52.659149  813918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:36:52.672149  813918 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:36:52.790045  813918 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:36:52.904510  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:36:52.917512  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:36:52.931680  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 06:36:52.940606  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 06:36:52.949651  813918 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 06:36:52.949722  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 06:36:52.958437  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 06:36:52.967122  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 06:36:52.975524  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 06:36:52.984274  813918 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:36:52.992118  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 06:36:53.000891  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 06:36:53.011203  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 06:36:53.020137  813918 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:36:53.027434  813918 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:36:53.034538  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:36:53.146732  813918 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 06:36:53.259109  813918 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 06:36:53.259213  813918 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 06:36:53.262865  813918 start.go:563] Will wait 60s for crictl version
	I1002 06:36:53.262951  813918 ssh_runner.go:195] Run: which crictl
	I1002 06:36:53.266209  813918 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:36:53.294330  813918 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 06:36:53.294471  813918 ssh_runner.go:195] Run: containerd --version
	I1002 06:36:53.317070  813918 ssh_runner.go:195] Run: containerd --version
	I1002 06:36:53.342544  813918 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 06:36:53.345439  813918 cli_runner.go:164] Run: docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:36:53.361595  813918 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:36:53.365182  813918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:36:53.374561  813918 kubeadm.go:883] updating cluster {Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:36:53.374681  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:53.374737  813918 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:36:53.399251  813918 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 06:36:53.399274  813918 containerd.go:534] Images already preloaded, skipping extraction
	I1002 06:36:53.399339  813918 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:36:53.423479  813918 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 06:36:53.423504  813918 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:36:53.423513  813918 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1002 06:36:53.423602  813918 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-110926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:36:53.423672  813918 ssh_runner.go:195] Run: sudo crictl info
	I1002 06:36:53.448450  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:36:53.448474  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:53.448496  813918 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:36:53.448523  813918 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-110926 NodeName:addons-110926 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:36:53.448665  813918 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-110926"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:36:53.448861  813918 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:36:53.457671  813918 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:36:53.457745  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:36:53.466514  813918 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1002 06:36:53.480222  813918 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:36:53.492979  813918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1002 06:36:53.506618  813918 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:36:53.510443  813918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:36:53.519937  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:36:53.633003  813918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:36:53.653268  813918 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926 for IP: 192.168.49.2
	I1002 06:36:53.653291  813918 certs.go:195] generating shared ca certs ...
	I1002 06:36:53.653331  813918 certs.go:227] acquiring lock for ca certs: {Name:mk33b75296d4c02eee9bab3e9582ce8896a2d7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:53.654149  813918 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key
	I1002 06:36:54.554249  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt ...
	I1002 06:36:54.554277  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt: {Name:mk2139057332209b98dbb746fb9a256d2b754164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.554459  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key ...
	I1002 06:36:54.554470  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key: {Name:mkcae11ed523222e33231ecbd86e12b64a288b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.554546  813918 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key
	I1002 06:36:54.895364  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt ...
	I1002 06:36:54.895399  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt: {Name:mke2bb76dd7b81d2d26af5e116b652209f0542b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.895600  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key ...
	I1002 06:36:54.895614  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key: {Name:mkc32897a4730ab5fb973fb69d1a38ca87d85c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.896344  813918 certs.go:257] generating profile certs ...
	I1002 06:36:54.896423  813918 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key
	I1002 06:36:54.896442  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt with IP's: []
	I1002 06:36:55.419216  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt ...
	I1002 06:36:55.419259  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: {Name:mk10e15791cbf0b0edd868b4fdb8e230e5e309e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.419452  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key ...
	I1002 06:36:55.419466  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key: {Name:mk9f0a92cebc1827b3a9e95b7f53c1d4b6a59638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.419563  813918 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549
	I1002 06:36:55.419584  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 06:36:55.722878  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 ...
	I1002 06:36:55.722908  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549: {Name:mk85eea21d417032742d45805e5f307e924f0055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.723654  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549 ...
	I1002 06:36:55.723671  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549: {Name:mkf298fb25e09f690a5e28cc66f4a6b37f67e15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.724361  813918 certs.go:382] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt
	I1002 06:36:55.724446  813918 certs.go:386] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549 -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key
	I1002 06:36:55.724499  813918 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key
	I1002 06:36:55.724522  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt with IP's: []
	I1002 06:36:56.363048  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt ...
	I1002 06:36:56.363081  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt: {Name:mk4c25ab58ebf52954efb245b3c0c0d9e1c6bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:56.363911  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key ...
	I1002 06:36:56.363932  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key: {Name:mk7f28565479e9a862d5049acbcab89444bf5a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:56.364713  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:36:56.364779  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:36:56.364814  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:36:56.364842  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem (1679 bytes)
	I1002 06:36:56.365421  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:36:56.384138  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 06:36:56.402907  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:36:56.420429  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:36:56.438118  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:36:56.455787  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:36:56.473374  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:36:56.490901  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:36:56.509097  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:36:56.526744  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:36:56.539426  813918 ssh_runner.go:195] Run: openssl version
	I1002 06:36:56.545473  813918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:36:56.553848  813918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.557589  813918 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.557674  813918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.599790  813918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:36:56.608153  813918 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:36:56.611552  813918 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:36:56.611600  813918 kubeadm.go:400] StartCluster: {Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:56.611680  813918 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 06:36:56.611736  813918 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:36:56.639982  813918 cri.go:89] found id: ""
	I1002 06:36:56.640052  813918 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:36:56.647729  813918 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:36:56.655474  813918 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:36:56.655568  813918 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:36:56.663121  813918 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:36:56.663142  813918 kubeadm.go:157] found existing configuration files:
	
	I1002 06:36:56.663221  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:36:56.670874  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:36:56.670972  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:36:56.678534  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:36:56.685938  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:36:56.685996  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:36:56.692708  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:36:56.699925  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:36:56.700015  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:36:56.707153  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:36:56.714621  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:36:56.714749  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:36:56.722338  813918 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:36:56.759248  813918 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:36:56.759571  813918 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:36:56.790582  813918 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:36:56.790657  813918 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 06:36:56.790699  813918 kubeadm.go:318] OS: Linux
	I1002 06:36:56.790763  813918 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:36:56.790820  813918 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 06:36:56.790875  813918 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:36:56.790936  813918 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:36:56.790994  813918 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:36:56.791049  813918 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:36:56.791100  813918 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:36:56.791153  813918 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:36:56.791207  813918 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 06:36:56.880850  813918 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:36:56.880966  813918 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:36:56.881067  813918 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:36:56.886790  813918 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:36:56.890544  813918 out.go:252]   - Generating certificates and keys ...
	I1002 06:36:56.890681  813918 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:36:56.890776  813918 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:36:57.277686  813918 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:36:57.698690  813918 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:36:58.123771  813918 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:36:58.316428  813918 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:36:58.712844  813918 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:36:58.713106  813918 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-110926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:36:59.412304  813918 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:36:59.412590  813918 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-110926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:36:59.506243  813918 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:37:00.458571  813918 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:37:00.702742  813918 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:37:00.703124  813918 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:37:01.245158  813918 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:37:01.470802  813918 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:37:01.723353  813918 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:37:01.786251  813918 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:37:02.286866  813918 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:37:02.287602  813918 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:37:02.290493  813918 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:37:02.293946  813918 out.go:252]   - Booting up control plane ...
	I1002 06:37:02.294063  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:37:02.294988  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:37:02.295992  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:37:02.312503  813918 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:37:02.312871  813918 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:37:02.320595  813918 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:37:02.321016  813918 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:37:02.321262  813918 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:37:02.457350  813918 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:37:02.457522  813918 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:37:03.461255  813918 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00198836s
	I1002 06:37:03.463308  813918 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:37:03.463532  813918 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:37:03.463645  813918 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:37:03.464191  813918 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:37:06.566691  813918 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.102303507s
	I1002 06:37:08.316492  813918 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.851816452s
	I1002 06:37:09.465139  813918 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001507743s
	I1002 06:37:09.489317  813918 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:37:09.522458  813918 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:37:09.556453  813918 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:37:09.556687  813918 kubeadm.go:318] [mark-control-plane] Marking the node addons-110926 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:37:09.572399  813918 kubeadm.go:318] [bootstrap-token] Using token: 7g41rx.fb6mqimdeeyoknq9
	I1002 06:37:09.575450  813918 out.go:252]   - Configuring RBAC rules ...
	I1002 06:37:09.575583  813918 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:37:09.580181  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:37:09.588090  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:37:09.592801  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:37:09.600582  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:37:09.607878  813918 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:37:09.872917  813918 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:37:10.299814  813918 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:37:10.872732  813918 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:37:10.874055  813918 kubeadm.go:318] 
	I1002 06:37:10.874135  813918 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:37:10.874146  813918 kubeadm.go:318] 
	I1002 06:37:10.874227  813918 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:37:10.874248  813918 kubeadm.go:318] 
	I1002 06:37:10.874283  813918 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:37:10.874350  813918 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:37:10.874409  813918 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:37:10.874417  813918 kubeadm.go:318] 
	I1002 06:37:10.874473  813918 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:37:10.874482  813918 kubeadm.go:318] 
	I1002 06:37:10.874532  813918 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:37:10.874540  813918 kubeadm.go:318] 
	I1002 06:37:10.874595  813918 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:37:10.874679  813918 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:37:10.874756  813918 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:37:10.874764  813918 kubeadm.go:318] 
	I1002 06:37:10.874852  813918 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:37:10.874936  813918 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:37:10.874945  813918 kubeadm.go:318] 
	I1002 06:37:10.875033  813918 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7g41rx.fb6mqimdeeyoknq9 \
	I1002 06:37:10.875146  813918 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 \
	I1002 06:37:10.875172  813918 kubeadm.go:318] 	--control-plane 
	I1002 06:37:10.875181  813918 kubeadm.go:318] 
	I1002 06:37:10.875270  813918 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:37:10.875279  813918 kubeadm.go:318] 
	I1002 06:37:10.875365  813918 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7g41rx.fb6mqimdeeyoknq9 \
	I1002 06:37:10.875475  813918 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 
	I1002 06:37:10.878324  813918 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 06:37:10.878562  813918 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 06:37:10.878676  813918 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:37:10.878697  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:37:10.878705  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:37:10.881877  813918 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 06:37:10.884817  813918 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 06:37:10.889466  813918 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 06:37:10.889488  813918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 06:37:10.902465  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 06:37:11.181141  813918 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:37:11.181229  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:11.181309  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-110926 minikube.k8s.io/updated_at=2025_10_02T06_37_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-110926 minikube.k8s.io/primary=true
	I1002 06:37:11.362613  813918 ops.go:34] apiserver oom_adj: -16
	I1002 06:37:11.362717  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:11.863387  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:12.363462  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:12.863468  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:13.362840  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:13.863815  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:14.363244  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:14.495136  813918 kubeadm.go:1113] duration metric: took 3.313961954s to wait for elevateKubeSystemPrivileges
	I1002 06:37:14.495171  813918 kubeadm.go:402] duration metric: took 17.883574483s to StartCluster
	I1002 06:37:14.495189  813918 settings.go:142] acquiring lock: {Name:mkfabb257d5e6dc89516b7f3eecfb5ad470245b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:14.495908  813918 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:37:14.496318  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/kubeconfig: {Name:mk61b1a16c6c070d43ba1e4fed7f7f8861077db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:14.497144  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:37:14.497165  813918 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 06:37:14.497416  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:14.497447  813918 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:37:14.497542  813918 addons.go:69] Setting yakd=true in profile "addons-110926"
	I1002 06:37:14.497556  813918 addons.go:238] Setting addon yakd=true in "addons-110926"
	I1002 06:37:14.497579  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.497665  813918 addons.go:69] Setting inspektor-gadget=true in profile "addons-110926"
	I1002 06:37:14.497681  813918 addons.go:238] Setting addon inspektor-gadget=true in "addons-110926"
	I1002 06:37:14.497701  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.498032  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.498105  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.498760  813918 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-110926"
	I1002 06:37:14.498784  813918 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-110926"
	I1002 06:37:14.498819  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.499233  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.504834  813918 addons.go:69] Setting metrics-server=true in profile "addons-110926"
	I1002 06:37:14.504923  813918 addons.go:238] Setting addon metrics-server=true in "addons-110926"
	I1002 06:37:14.504988  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.505608  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.507518  813918 out.go:179] * Verifying Kubernetes components...
	I1002 06:37:14.507725  813918 addons.go:69] Setting cloud-spanner=true in profile "addons-110926"
	I1002 06:37:14.507753  813918 addons.go:238] Setting addon cloud-spanner=true in "addons-110926"
	I1002 06:37:14.507795  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.508276  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.519123  813918 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-110926"
	I1002 06:37:14.519204  813918 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-110926"
	I1002 06:37:14.519258  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.523209  813918 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-110926"
	I1002 06:37:14.523335  813918 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-110926"
	I1002 06:37:14.523396  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.523909  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.524419  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.536906  813918 addons.go:69] Setting registry=true in profile "addons-110926"
	I1002 06:37:14.536941  813918 addons.go:238] Setting addon registry=true in "addons-110926"
	I1002 06:37:14.536983  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.537475  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.539289  813918 addons.go:69] Setting default-storageclass=true in profile "addons-110926"
	I1002 06:37:14.558568  813918 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-110926"
	I1002 06:37:14.559019  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.559239  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:14.541208  813918 addons.go:69] Setting registry-creds=true in profile "addons-110926"
	I1002 06:37:14.561178  813918 addons.go:238] Setting addon registry-creds=true in "addons-110926"
	I1002 06:37:14.561363  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.541231  813918 addons.go:69] Setting storage-provisioner=true in profile "addons-110926"
	I1002 06:37:14.563047  813918 addons.go:238] Setting addon storage-provisioner=true in "addons-110926"
	I1002 06:37:14.563932  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.566547  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.541239  813918 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-110926"
	I1002 06:37:14.579820  813918 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-110926"
	I1002 06:37:14.580221  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.586764  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.541246  813918 addons.go:69] Setting volcano=true in profile "addons-110926"
	I1002 06:37:14.607872  813918 addons.go:238] Setting addon volcano=true in "addons-110926"
	I1002 06:37:14.541349  813918 addons.go:69] Setting volumesnapshots=true in profile "addons-110926"
	I1002 06:37:14.607929  813918 addons.go:238] Setting addon volumesnapshots=true in "addons-110926"
	I1002 06:37:14.607950  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.556898  813918 addons.go:69] Setting gcp-auth=true in profile "addons-110926"
	I1002 06:37:14.624993  813918 mustload.go:65] Loading cluster: addons-110926
	I1002 06:37:14.625253  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:14.625626  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.556924  813918 addons.go:69] Setting ingress=true in profile "addons-110926"
	I1002 06:37:14.631873  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.632366  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.556929  813918 addons.go:69] Setting ingress-dns=true in profile "addons-110926"
	I1002 06:37:14.632643  813918 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:37:14.631728  813918 addons.go:238] Setting addon ingress=true in "addons-110926"
	I1002 06:37:14.633388  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.633841  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.650708  813918 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:37:14.654882  813918 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:37:14.654909  813918 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:37:14.654981  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.659338  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.671893  813918 addons.go:238] Setting addon ingress-dns=true in "addons-110926"
	I1002 06:37:14.671956  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.672451  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.681943  813918 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:37:14.682145  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:37:14.682171  813918 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:37:14.682243  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.730779  813918 addons.go:238] Setting addon default-storageclass=true in "addons-110926"
	I1002 06:37:14.730824  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.731463  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.736081  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:37:14.743901  813918 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:37:14.748859  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:37:14.749029  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.798861  813918 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:37:14.801456  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:37:14.801501  813918 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:37:14.801637  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.840051  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:37:14.844935  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:37:14.848913  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:37:14.851733  813918 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:37:14.854520  813918 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:37:14.857638  813918 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:37:14.858717  813918 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:37:14.858738  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:37:14.858817  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.860526  813918 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:37:14.860546  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:37:14.860632  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.893874  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:37:14.894058  813918 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:37:14.897434  813918 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:37:14.897458  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:37:14.897547  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.918428  813918 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-110926"
	I1002 06:37:14.918472  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.918875  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.921121  813918 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:37:14.925950  813918 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:37:14.925974  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:37:14.926042  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.945293  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.949541  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:37:14.956438  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:37:14.957575  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:14.966829  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:37:14.967843  813918 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:37:14.983357  813918 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1002 06:37:14.991256  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:37:14.991531  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:37:14.991690  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:14.992663  813918 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:37:14.992678  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:37:14.992742  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.996512  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:14.996904  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:37:14.996921  813918 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:37:14.996989  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.005391  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:37:15.005812  813918 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:37:15.006640  813918 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:37:15.006661  813918 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:37:15.006739  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.008284  813918 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1002 06:37:15.009342  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:37:15.009438  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:37:15.009541  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.028005  813918 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1002 06:37:15.033152  813918 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 06:37:15.033183  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1002 06:37:15.033275  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.054617  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.055541  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:15.055750  813918 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:37:15.055763  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:37:15.055832  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.061085  813918 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:37:15.061106  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:37:15.061173  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.074564  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.081642  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.111200  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.136860  813918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:37:15.148801  813918 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:37:15.151741  813918 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:37:15.156261  813918 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:37:15.156284  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:37:15.156355  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.169924  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.193516  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.199715  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.214370  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.237018  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.237601  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.243930  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.244071  813918 retry.go:31] will retry after 305.561491ms: ssh: handshake failed: EOF
	I1002 06:37:15.251932  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.255879  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.259811  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.259836  813918 retry.go:31] will retry after 210.072349ms: ssh: handshake failed: EOF
	I1002 06:37:15.265683  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.272079  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.565323  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.565348  813918 retry.go:31] will retry after 243.153386ms: ssh: handshake failed: EOF
	I1002 06:37:15.846286  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:37:15.846311  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:37:15.944527  813918 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:15.944599  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:37:15.970354  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:37:15.985885  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:37:16.012665  813918 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:37:16.012693  813918 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:37:16.019458  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:37:16.019485  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:37:16.043516  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:37:16.043539  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:37:16.060218  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:37:16.072624  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:37:16.090843  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 06:37:16.096286  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:37:16.096364  813918 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:37:16.184119  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:16.205029  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:37:16.206409  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:37:16.211099  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:37:16.221140  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:37:16.281478  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:37:16.281550  813918 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:37:16.294235  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:37:16.294308  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:37:16.314044  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:37:16.314122  813918 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:37:16.314878  813918 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:37:16.314923  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:37:16.334271  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:37:16.435552  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:37:16.435625  813918 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:37:16.486137  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:37:16.508790  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:37:16.508817  813918 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:37:16.527074  813918 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.79094086s)
	I1002 06:37:16.527103  813918 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 06:37:16.527172  813918 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.390287567s)
	I1002 06:37:16.527930  813918 node_ready.go:35] waiting up to 6m0s for node "addons-110926" to be "Ready" ...
	I1002 06:37:16.692302  813918 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:16.692321  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:37:16.739744  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:37:16.739768  813918 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:37:16.803024  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:37:16.866551  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:16.918292  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:37:16.918317  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:37:16.976907  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:37:16.976934  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:37:17.032696  813918 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-110926" context rescaled to 1 replicas
	I1002 06:37:17.174089  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:37:17.174115  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:37:17.194531  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:37:17.590550  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:37:17.590575  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:37:17.985718  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:37:17.985751  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:37:18.258016  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:37:18.258042  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:37:18.426273  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:37:18.426298  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	W1002 06:37:18.558468  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:18.892311  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:37:18.892338  813918 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:37:19.094159  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:37:19.094182  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:37:19.262380  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:37:19.262404  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:37:19.445644  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:37:19.445669  813918 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:37:19.720946  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 06:37:21.041084  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:21.578538  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.608100964s)
	I1002 06:37:21.578618  813918 addons.go:479] Verifying addon ingress=true in "addons-110926"
	I1002 06:37:21.579021  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.5930618s)
	I1002 06:37:21.579193  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.518951153s)
	I1002 06:37:21.579261  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.506611096s)
	I1002 06:37:21.582085  813918 out.go:179] * Verifying ingress addon...
	I1002 06:37:21.586543  813918 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:37:21.655191  813918 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:37:21.655263  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.115015  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.583411  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:37:22.583564  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:22.610354  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:22.612089  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.737638  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:37:22.767377  813918 addons.go:238] Setting addon gcp-auth=true in "addons-110926"
	I1002 06:37:22.767434  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:22.767894  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:22.793827  813918 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:37:22.793887  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:22.830306  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:23.096079  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:23.101826  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:23.167688  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (7.0767591s)
	I1002 06:37:23.167794  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.983606029s)
	W1002 06:37:23.167817  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:23.167835  813918 retry.go:31] will retry after 146.597414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:23.167865  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.9627765s)
	I1002 06:37:23.167924  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.96145652s)
	I1002 06:37:23.167989  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.956824802s)
	I1002 06:37:23.168168  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.946960517s)
	I1002 06:37:23.168215  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.833882657s)
	I1002 06:37:23.168229  813918 addons.go:479] Verifying addon registry=true in "addons-110926"
	I1002 06:37:23.168432  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.682270471s)
	I1002 06:37:23.168504  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.365459957s)
	I1002 06:37:23.168515  813918 addons.go:479] Verifying addon metrics-server=true in "addons-110926"
	I1002 06:37:23.168593  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.302013657s)
	W1002 06:37:23.168612  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:37:23.168628  813918 retry.go:31] will retry after 145.945512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:37:23.168670  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.974112429s)
	I1002 06:37:23.171600  813918 out.go:179] * Verifying registry addon...
	I1002 06:37:23.175423  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:37:23.175675  813918 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-110926 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:37:23.215812  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.494815173s)
	I1002 06:37:23.215842  813918 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-110926"
	I1002 06:37:23.218592  813918 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:37:23.218725  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:23.222422  813918 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:37:23.223098  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:37:23.225306  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:37:23.225336  813918 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:37:23.265230  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:37:23.265257  813918 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:37:23.271284  813918 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:37:23.271303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:23.301079  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:37:23.301100  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:37:23.315262  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:23.315479  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:23.362438  813918 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:37:23.362461  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:23.371447  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:37:23.590215  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:23.690482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:23.726143  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.091791  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:24.192769  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:24.240956  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.605709  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:24.703226  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:24.726522  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.936549  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.621028233s)
	I1002 06:37:24.936718  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.621420893s)
	W1002 06:37:24.936789  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:24.936837  813918 retry.go:31] will retry after 561.608809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:24.936908  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.565434855s)
	I1002 06:37:24.939978  813918 addons.go:479] Verifying addon gcp-auth=true in "addons-110926"
	I1002 06:37:24.944986  813918 out.go:179] * Verifying gcp-auth addon...
	I1002 06:37:24.948596  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:37:24.951413  813918 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:37:24.951434  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:25.090748  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:25.178550  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:25.226439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:25.452219  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:25.499574  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:37:25.531518  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:25.589865  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:25.683530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:25.726612  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:25.951542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:26.090750  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:26.179030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:26.226732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:26.317076  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:26.317226  813918 retry.go:31] will retry after 583.727209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:26.452148  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:26.589788  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:26.683078  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:26.727068  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:26.901144  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:26.952896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:27.091613  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:27.179042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:27.226561  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:27.451348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:27.531649  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:27.591525  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:27.683031  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:27.712297  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:27.712326  813918 retry.go:31] will retry after 648.169313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:27.726104  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:27.952014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:28.090463  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:28.191332  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:28.226482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:28.360900  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:28.452621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:28.590622  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:28.684494  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:28.726619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:28.952459  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:29.090817  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:29.180514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:29.185770  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:29.185799  813918 retry.go:31] will retry after 638.486695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:29.226864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:29.451636  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:29.589804  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:29.683512  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:29.726574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:29.824932  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:29.952114  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:30.032649  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:30.090885  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:30.179094  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:30.226154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:30.452508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:30.592222  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:30.684732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:30.698805  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:30.698840  813918 retry.go:31] will retry after 1.386655025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:30.726921  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:30.951637  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:31.090673  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:31.178664  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:31.226447  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:31.452374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:31.590331  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:31.682815  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:31.726337  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:31.952229  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:32.086627  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:32.090653  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:32.179238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:32.226721  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:32.452452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:32.530986  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:32.590889  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:32.683805  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:32.727482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:32.884199  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:32.884242  813918 retry.go:31] will retry after 1.764941661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:32.952014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:33.090182  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:33.179042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:33.226874  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:33.451508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:33.590092  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:33.682977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:33.725974  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:33.951836  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:34.090782  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:34.178819  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:34.226525  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:34.452486  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:34.531295  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:34.590650  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:34.649946  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:34.686870  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:34.726748  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:34.952390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:35.093119  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:35.179530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:35.226048  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:35.451917  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:35.484501  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:35.484530  813918 retry.go:31] will retry after 6.007881753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:35.590705  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:35.683551  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:35.726503  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:35.952327  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:36.090688  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:36.191481  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:36.226150  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:36.452471  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:36.590726  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:36.683932  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:36.727072  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:36.951909  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:37.032811  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:37.090041  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:37.178985  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:37.226683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:37.451377  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:37.590155  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:37.683502  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:37.726422  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:37.951666  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:38.090533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:38.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:38.226290  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:38.452969  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:38.589891  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:38.678445  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:38.726426  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:38.951569  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:39.090363  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:39.178554  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:39.226682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:39.451688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:39.531480  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:39.589495  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:39.683560  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:39.726605  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:39.951696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:40.090353  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:40.179467  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:40.226430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:40.451667  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:40.590213  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:40.682834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:40.726735  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:40.951452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:41.090424  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:41.178251  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:41.225935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:41.451935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:41.493320  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:37:41.531920  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:41.590388  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:41.682815  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:41.727080  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:41.951832  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:42.097513  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:42.180007  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:42.228335  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:42.397373  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:42.397404  813918 retry.go:31] will retry after 6.331757331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:42.452908  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:42.590432  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:42.683443  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:42.726508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:42.952318  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:43.090165  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:43.178978  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:43.225896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:43.451987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:43.590602  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:43.678528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:43.726661  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:43.951424  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:44.031312  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:44.090520  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:44.178976  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:44.226569  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:44.451727  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:44.596784  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:44.697937  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:44.726640  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:44.951415  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:45.090703  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:45.179490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:45.227523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:45.451631  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:45.589687  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:45.683601  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:45.727673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:45.951624  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:46.031927  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:46.090068  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:46.178708  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:46.226451  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:46.451429  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:46.590533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:46.678457  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:46.726355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:46.952193  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:47.090132  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:47.179505  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:47.226590  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:47.451700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:47.590360  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:47.683040  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:47.725863  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:47.952219  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:48.090642  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:48.178440  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:48.226648  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:48.451752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:48.531666  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:48.590304  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:48.678358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:48.726321  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:48.729320  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:48.951489  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:49.091175  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:49.180116  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:49.226101  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:49.452407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:49.530266  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:49.530298  813918 retry.go:31] will retry after 12.414314859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
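The repeated validation failure above means kubectl rejected the generated ig-crd.yaml because it lacks the two type-metadata fields every Kubernetes manifest must declare: `apiVersion` and `kind`. As a hedged illustration (the resource and name below are hypothetical, not taken from the actual file), a well-formed CRD manifest header looks like:

```yaml
# Illustrative only: the two fields kubectl's validation reported as missing.
# The actual contents of /etc/kubernetes/addons/ig-crd.yaml are not shown in the log.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: example-traces.gadget.example.io   # hypothetical name
```

The `--validate=false` workaround mentioned in the stderr would suppress the error, but apply would then still fail or produce an empty object, since the server cannot route a document with no `apiVersion`/`kind`.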
	I1002 06:37:49.590599  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:49.683495  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:49.726800  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:49.951645  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:50.090598  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:50.178639  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:50.226627  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:50.451589  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:50.590544  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:50.682812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:50.726927  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:50.951882  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:51.030659  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:51.089892  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:51.179276  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:51.225934  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:51.451935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:51.589726  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:51.683005  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:51.725957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:51.951996  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:52.091773  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:52.178278  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:52.226119  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:52.451977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:52.590251  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:52.683413  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:52.726061  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:52.952248  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:53.031163  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:53.090127  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:53.178995  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:53.227062  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:53.452030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:53.590043  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:53.683319  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:53.726034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:53.951951  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:54.090498  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:54.178558  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:54.226461  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:54.451500  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:54.590406  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:54.683724  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:54.726962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:54.952006  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:55.031442  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:55.091214  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:55.179018  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:55.225804  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:55.451548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:55.590030  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:55.682894  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:55.726632  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:55.951851  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:56.090254  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:56.179316  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:56.225963  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:56.451980  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:56.589903  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:56.683768  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:56.726710  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:56.969890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:57.039661  813918 node_ready.go:49] node "addons-110926" is "Ready"
	I1002 06:37:57.039759  813918 node_ready.go:38] duration metric: took 40.511800003s for node "addons-110926" to be "Ready" ...
	I1002 06:37:57.039788  813918 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:57.039875  813918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:57.093303  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:57.094841  813918 api_server.go:72] duration metric: took 42.597646349s to wait for apiserver process to appear ...
	I1002 06:37:57.094869  813918 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:37:57.094891  813918 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 06:37:57.110477  813918 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 06:37:57.112002  813918 api_server.go:141] control plane version: v1.34.1
	I1002 06:37:57.112039  813918 api_server.go:131] duration metric: took 17.162356ms to wait for apiserver health ...
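The wait loops that dominate this log (kapi.go polling pod selectors at a fixed interval, node_ready.go retrying until the node reports Ready, api_server.go polling `/healthz` until it returns 200) all follow the same poll-until-deadline pattern. A minimal sketch of that pattern, assuming a generic `check` callable rather than minikube's actual Go helpers:

```python
import time

def poll_until(check, timeout=30.0, interval=0.5):
    """Poll check() until it returns True or the deadline passes.

    Mirrors the log's wait loops: re-run the readiness check at a
    fixed interval, report failure once the timeout is exhausted.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example: a check that only succeeds on its third invocation,
# like a pod that becomes Ready after a couple of polls.
attempts = {"n": 0}
def ready():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(poll_until(ready, timeout=5.0, interval=0.01))
```

The real minikube code (retry.go) additionally grows the delay between attempts, visible above as "will retry after 6.331757331s" followed by "will retry after 12.414314859s"; the fixed-interval version here is the simplest form of the same idea.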
	I1002 06:37:57.112050  813918 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:37:57.164751  813918 system_pods.go:59] 19 kube-system pods found
	I1002 06:37:57.164836  813918 system_pods.go:61] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.164843  813918 system_pods.go:61] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.164850  813918 system_pods.go:61] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.164855  813918 system_pods.go:61] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending
	I1002 06:37:57.164860  813918 system_pods.go:61] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.164866  813918 system_pods.go:61] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.164895  813918 system_pods.go:61] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.164906  813918 system_pods.go:61] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.164911  813918 system_pods.go:61] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.164915  813918 system_pods.go:61] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.164927  813918 system_pods.go:61] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.164931  813918 system_pods.go:61] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.164936  813918 system_pods.go:61] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.164940  813918 system_pods.go:61] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.164952  813918 system_pods.go:61] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.164956  813918 system_pods.go:61] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.164969  813918 system_pods.go:61] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.164978  813918 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending
	I1002 06:37:57.164984  813918 system_pods.go:61] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.164996  813918 system_pods.go:74] duration metric: took 52.940352ms to wait for pod list to return data ...
	I1002 06:37:57.165020  813918 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:37:57.180144  813918 default_sa.go:45] found service account: "default"
	I1002 06:37:57.180178  813918 default_sa.go:55] duration metric: took 15.149731ms for default service account to be created ...
	I1002 06:37:57.180188  813918 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:37:57.222552  813918 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:37:57.222577  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:57.223365  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.223410  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.223418  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.223424  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.223428  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending
	I1002 06:37:57.223442  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.223456  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.223462  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.223474  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.223481  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.223485  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.223492  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.223496  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.223503  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.223507  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.223510  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.223514  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.223521  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.223531  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending
	I1002 06:37:57.223536  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.223550  813918 retry.go:31] will retry after 203.421597ms: missing components: kube-dns
	I1002 06:37:57.317769  813918 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:37:57.317813  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:57.437762  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.437803  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.437810  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.437815  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.437821  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:57.437826  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.437841  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.437853  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.437869  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.437874  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.437877  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.437882  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.437900  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.437905  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.437909  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.437913  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.437926  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.437937  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.437946  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.437955  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.437969  813918 retry.go:31] will retry after 264.460556ms: missing components: kube-dns
	I1002 06:37:57.457586  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:57.591211  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:57.684302  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:57.707934  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.707975  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.707990  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:57.708000  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:57.708018  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:57.708030  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.708035  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.708040  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.708051  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.708113  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:57.708129  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.708172  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.708184  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:57.708195  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:57.708207  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:57.708220  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:57.708228  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:57.708247  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.708255  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.708270  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:57.708285  813918 retry.go:31] will retry after 422.985157ms: missing components: kube-dns
	I1002 06:37:57.742917  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:57.952834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:58.091317  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:58.137271  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:58.137312  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:58.137322  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:58.137331  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:58.137338  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:58.137342  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:58.137350  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:58.137355  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:58.137359  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:58.137366  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:58.137375  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:58.137380  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:58.137386  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:58.137399  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:58.137411  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:58.137417  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:58.137426  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:58.137433  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.137444  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.137451  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:58.137467  813918 retry.go:31] will retry after 586.146569ms: missing components: kube-dns
	I1002 06:37:58.178407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:58.235878  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:58.452723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:58.614086  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:58.705574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:58.752782  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:58.752871  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Running
	I1002 06:37:58.752902  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:58.752951  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:58.752984  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:58.753015  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:58.753040  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:58.753071  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:58.753100  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:58.753128  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:58.753156  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:58.753185  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:58.753215  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:58.753246  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:58.753287  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:58.753323  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:58.753344  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:58.753369  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.753402  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.753429  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:58.753455  813918 system_pods.go:126] duration metric: took 1.573257013s to wait for k8s-apps to be running ...
	I1002 06:37:58.753478  813918 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:37:58.753557  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:37:58.756092  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:58.811373  813918 system_svc.go:56] duration metric: took 57.886892ms WaitForService to wait for kubelet
	I1002 06:37:58.811449  813918 kubeadm.go:586] duration metric: took 44.314256903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:58.811493  813918 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:37:58.822249  813918 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 06:37:58.822353  813918 node_conditions.go:123] node cpu capacity is 2
	I1002 06:37:58.822383  813918 node_conditions.go:105] duration metric: took 10.860686ms to run NodePressure ...
	I1002 06:37:58.822420  813918 start.go:241] waiting for startup goroutines ...
	I1002 06:37:58.952958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:59.090849  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:59.194378  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:59.293675  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:59.453551  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:59.590199  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:59.683743  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:59.727149  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:59.952566  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:00.095335  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:00.179662  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:00.233910  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:00.456053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:00.590708  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:00.683163  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:00.726621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:00.952293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:01.091005  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:01.179669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:01.229085  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:01.453177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:01.591279  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:01.686492  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:01.728097  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:01.945617  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:01.952810  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:02.090686  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:02.179657  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:02.228561  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:02.452023  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:02.591508  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:02.683154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:02.726517  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:38:02.824299  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:02.824331  813918 retry.go:31] will retry after 15.691806375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:02.952380  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:03.090609  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:03.178940  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:03.227145  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:03.453458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:03.590296  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:03.683856  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:03.728071  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:03.952283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:04.091664  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:04.192092  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:04.226458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:04.451525  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:04.589908  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:04.683265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:04.730121  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:04.952803  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:05.091341  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:05.179246  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:05.227241  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:05.453166  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:05.590701  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:05.678855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:05.729441  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:05.955761  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:06.089976  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:06.179542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:06.229669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:06.451663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:06.590195  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:06.684205  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:06.784414  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:06.952931  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:07.090633  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:07.179271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:07.226645  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:07.452374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:07.590940  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:07.683125  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:07.726314  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:07.958423  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:08.089866  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:08.178562  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:08.226685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:08.452416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:08.589770  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:08.683752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:08.726663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:08.952521  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:09.090474  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:09.179170  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:09.227253  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:09.453357  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:09.593377  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:09.684130  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:09.728107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:09.951741  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:10.090984  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:10.181589  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:10.227685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:10.451548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:10.590276  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:10.684315  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:10.726459  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:10.951730  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:11.094349  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:11.181744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:11.226987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:11.452812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:11.589905  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:11.684532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:11.727310  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:11.952952  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:12.090716  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:12.178859  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:12.227650  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:12.452172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:12.590288  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:12.684016  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:12.727454  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:12.952912  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:13.089873  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:13.179357  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:13.226476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:13.452233  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:13.590829  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:13.683018  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:13.727319  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:13.952542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:14.091679  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:14.180387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:14.229029  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:14.453283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:14.593239  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:14.684343  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:14.727726  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:14.951591  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:15.090426  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:15.178861  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:15.227557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:15.452049  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:15.591161  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:15.683892  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:15.726700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:15.951767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:16.090224  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:16.179552  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:16.230312  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:16.452584  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:16.590173  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:16.682977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:16.728540  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:16.952802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:17.089859  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:17.178855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:17.227103  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:17.452592  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:17.589995  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:17.683737  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:17.727124  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:17.952069  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:18.090149  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:18.178860  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:18.227063  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:18.452179  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:18.516517  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:18.591793  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:18.683303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:18.726902  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:18.951881  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:19.090407  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:19.179390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:19.280453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:19.453053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:38:19.506255  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:19.506287  813918 retry.go:31] will retry after 24.46264979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:19.591253  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:19.683612  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:19.727161  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:19.951604  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:20.090820  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:20.179282  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:20.226653  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:20.451718  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:20.590946  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:20.683133  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:20.726532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:20.952036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:21.090532  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:21.179243  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:21.227567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:21.452954  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:21.590813  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:21.683988  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:21.726704  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:21.955708  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:22.090204  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:22.179312  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:22.226758  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:22.451702  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:22.590436  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:22.683396  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:22.726810  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:22.952518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:23.090640  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:23.178389  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:23.226432  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:23.452557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:23.589536  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:23.683265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:23.726387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:23.951660  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:24.089946  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:24.179032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:24.231204  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:24.452096  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:24.591481  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:24.684150  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:24.727560  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:24.951946  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:25.090564  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:25.180720  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:25.227767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:25.452182  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:25.590552  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:25.683982  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:25.727145  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:25.952505  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:26.096097  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:26.199167  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:26.227457  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:26.451429  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:26.589950  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:26.682877  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:26.728464  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:26.952825  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:27.090029  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:27.178693  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:27.227164  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:27.451877  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:27.590889  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:27.694494  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:27.726681  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:27.953022  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:28.090718  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:28.178712  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:28.226849  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:28.451699  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:28.590634  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:28.680358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:28.727806  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:28.952386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:29.090865  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:29.192262  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:29.296040  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:29.458956  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:29.592945  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:29.696528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:29.727745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:29.960224  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:30.108669  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:30.181176  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:30.229077  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:30.453626  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:30.590233  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:30.688386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:30.727482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:30.962237  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:31.091531  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:31.180490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:31.229509  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:31.452749  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:31.591491  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:31.683355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:31.726970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:31.952445  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:32.091436  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:32.190896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:32.228381  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:32.452736  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:32.590064  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:32.684030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:32.726390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:32.951770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:33.090909  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:33.178957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:33.228094  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:33.452528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:33.590375  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:33.684236  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:33.727041  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:33.952649  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:34.090690  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:34.178430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:34.227390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:34.452820  813918 kapi.go:107] duration metric: took 1m9.5042235s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 06:38:34.456518  813918 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-110926 cluster.
	I1002 06:38:34.459299  813918 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 06:38:34.462514  813918 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 06:38:34.590456  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:34.683783  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:34.726876  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:35.091815  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:35.192181  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:35.225996  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:35.590532  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:35.683177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:35.727077  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:36.090514  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:36.178631  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:36.226657  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:36.590586  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:36.684420  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:36.726745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:37.090769  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:37.193241  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:37.227067  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:37.591255  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:37.682734  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:37.727297  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:38.089746  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:38.178757  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:38.227287  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:38.591547  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:38.691271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:38.727108  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:39.106229  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:39.202273  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:39.228516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:39.589988  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:39.679442  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:39.726895  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:40.094511  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:40.179452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:40.237240  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:40.601942  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:40.693742  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:40.738619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:41.091045  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:41.191515  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:41.226632  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:41.591721  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:41.683452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:41.726863  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:42.091861  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:42.204238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:42.227557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:42.590297  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:42.683271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:42.727579  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.091018  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:43.179103  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:43.226868  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.591731  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:43.684032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:43.726500  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.969756  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:44.090261  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:44.179366  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:44.228188  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:44.592341  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:44.686940  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:44.727784  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:45.092283  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:45.178091  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.20829608s)
	W1002 06:38:45.178208  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:45.178250  813918 retry.go:31] will retry after 22.26617142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:45.179543  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:45.236432  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:45.590441  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:45.679320  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:45.727621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:46.090405  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:46.178426  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:46.226663  813918 kapi.go:107] duration metric: took 1m23.00356106s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 06:38:46.589619  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:46.683261  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:47.089734  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:47.179374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:47.592660  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:47.683768  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:48.090007  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:48.178644  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:48.591375  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:48.683509  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:49.089829  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:49.178961  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:49.591248  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:49.691276  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:50.089984  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:50.179171  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:50.590696  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:50.683346  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:51.089635  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:51.178745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:51.590723  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:51.683306  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:52.090482  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:52.190696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:52.590622  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:52.678787  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:53.090135  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:53.179421  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:53.590204  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:53.684303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:54.089742  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:54.178289  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:54.591054  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:54.692841  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:55.091556  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:55.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:55.590831  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:55.682962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:56.090533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:56.178890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:56.590836  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:56.683198  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:57.090570  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:57.179513  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:57.590364  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:57.683132  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:58.089540  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:58.179053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:58.590839  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:58.683962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:59.090850  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:59.190988  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:59.590732  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:59.685032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:00.114597  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:00.198802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:00.590774  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:00.683043  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:01.090771  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:01.178723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:01.590300  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:01.684480  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:02.091506  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:02.180050  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:02.591681  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:02.686987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:03.092104  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:03.180518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:03.590550  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:03.684084  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:04.091333  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:04.178516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:04.590364  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:04.685968  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:05.091208  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:05.179114  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:05.593116  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:05.693180  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:06.099807  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:06.192434  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:06.591063  813918 kapi.go:107] duration metric: took 1m45.004516868s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 06:39:06.691162  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:07.178929  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:07.445436  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:39:07.683258  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:08.179496  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:39:08.321958  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:39:08.322050  813918 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
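	The failure above repeats on every retry: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one document in the file lacks the top-level `apiVersion` and `kind` fields that every Kubernetes manifest must declare. The actual contents of ig-crd.yaml are not visible in this log; purely as a hedged illustration, a CRD manifest that passes this validation begins with a header of this shape (the `metadata.name` below is hypothetical):

```yaml
# Minimal header kubectl validation requires on each YAML document.
# apiVersion and kind are the two fields the log reports as "not set".
# The name shown here is illustrative, not taken from ig-crd.yaml.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: example-resources.gadget.example.io
```

	A stray leading `---` separator or a truncated first document in the file would produce exactly this "[apiVersion not set, kind not set]" error, since kubectl then validates an effectively empty document.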
	I1002 06:39:08.683452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:09.179353  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:09.686227  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:10.179510  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:10.683355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:11.179458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:11.679580  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:12.179918  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:12.684042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:13.178652  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:13.685874  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:14.179294  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:14.688744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:15.178402  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:15.684134  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:16.178182  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:16.682141  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:17.179203  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:17.684865  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:18.183409  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:18.683201  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:19.178867  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:19.679950  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:20.179378  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:20.683751  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:21.179070  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:21.679127  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:22.178339  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:22.682554  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:23.179809  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:23.684571  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:24.178890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:24.684796  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:25.178633  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:25.683087  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:26.178740  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:26.683803  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:27.178621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:27.679141  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:28.178920  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:28.684290  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:29.179325  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:29.680059  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:30.180120  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:30.683936  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:31.178444  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:31.683250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:32.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:32.684538  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:33.179130  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:33.684267  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:34.179364  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:34.684136  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:35.178488  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:35.683770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:36.179826  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:36.683998  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:37.179895  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:37.683890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:38.180914  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:38.683767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:39.179513  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:39.686625  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:40.179680  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:40.684314  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:41.178731  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:41.682866  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:42.180532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:42.685515  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:43.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:43.684036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:44.178761  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:44.678674  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:45.180677  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:45.683093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:46.178745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:46.682966  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:47.178714  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:47.687786  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:48.180034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:48.682439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:49.179416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:49.685544  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:50.179302  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:50.685100  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:51.179287  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:51.683778  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:52.179021  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:52.679097  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:53.178970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:53.684700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:54.179476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:54.684994  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:55.178796  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:55.679165  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:56.178666  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:56.684967  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:57.178854  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:57.678696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:58.179624  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:58.683296  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:59.180450  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:59.687218  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:00.195539  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:00.689354  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:01.178732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:01.685212  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:02.179265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:02.683955  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:03.178860  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:03.678460  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:04.178855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:04.686281  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:05.179400  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:05.679175  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:06.179017  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:06.683057  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:07.179262  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:07.684658  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:08.179829  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:08.683098  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:09.178903  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:09.686212  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:10.179744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:10.682952  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:11.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:11.685085  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:12.179154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:12.683453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:13.179437  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:13.683490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:14.179250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:14.684690  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:15.179775  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:15.684387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:16.178957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:16.678523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:17.179146  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:17.679174  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:18.179689  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:18.682903  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:19.178772  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:19.685172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:20.178915  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:20.684537  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:21.178688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:21.681514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:22.179537  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:22.683064  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:23.178976  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:23.682793  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:24.179279  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:24.685175  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:25.178553  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:25.683682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:26.179629  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:26.679433  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:27.178986  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:27.683516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:28.178938  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:28.684313  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:29.179037  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:29.682849  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:30.180161  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:30.683924  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:31.178283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:31.683997  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:32.179049  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:32.685786  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:33.179179  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:33.682830  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:34.179638  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:34.683135  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:35.178744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:35.684184  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:36.179717  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:36.679174  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:37.179123  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:37.683396  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:38.179078  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:38.682970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:39.179304  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:39.684431  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:40.179468  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:40.683907  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:41.178963  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:41.684491  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:42.180147  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:42.678812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:43.178520  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:43.679177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:44.178790  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:44.684374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:45.179855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:45.684397  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:46.179055  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:46.685615  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:47.178939  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:47.680235  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:48.178829  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:48.682679  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:49.179766  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:49.686979  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:50.178641  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:50.683095  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:51.178582  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:51.682578  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:52.179361  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:52.684019  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:53.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:53.683211  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:54.180830  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:54.685818  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:55.179776  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:55.683755  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:56.179597  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:56.683541  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:57.178536  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:57.679350  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:58.183218  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:58.683948  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:59.179617  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:59.681398  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:00.200089  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:00.683523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:01.180022  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:01.682762  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:02.179798  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:02.683809  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:03.179630  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:03.683920  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:04.178316  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:04.686534  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:05.179292  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:05.683293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:06.178370  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:06.682944  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:07.178545  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:07.685071  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:08.179215  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:08.684453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:09.178985  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:09.688380  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:10.179014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:10.682840  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:11.179693  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:11.683955  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:12.179386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:12.679132  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:13.178565  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:13.680539  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:14.179430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:14.684344  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:15.179591  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:15.679368  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:16.178436  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:16.683864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:17.180546  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:17.683586  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:18.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:18.679618  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:19.179120  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:19.684107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:20.178861  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:20.684034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:21.178317  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:21.684041  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:22.178322  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:22.683407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:23.179139  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:23.683117  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:24.178439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:24.685938  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:25.178476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:25.683871  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:26.178257  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:26.684421  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:27.178363  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:27.684075  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:28.178491  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:28.684622  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:29.179430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:29.679029  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:30.179857  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:30.684822  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:31.178471  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:31.682266  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:32.178454  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:32.683741  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:33.179093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:33.684238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:34.179255  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:34.685850  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:35.179285  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:35.684332  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	[... identical "waiting for pod" entries repeated roughly every 0.5s from 06:41:36 through 06:43:22; 213 lines omitted ...]
	I1002 06:43:23.176387  813918 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1002 06:43:23.176421  813918 kapi.go:107] duration metric: took 6m0.001003242s to wait for kubernetes.io/minikube-addons=registry ...
	W1002 06:43:23.176505  813918 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1002 06:43:23.179649  813918 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, default-storageclass, volcano, nvidia-device-plugin, storage-provisioner, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, gcp-auth, csi-hostpath-driver, ingress
	I1002 06:43:23.182525  813918 addons.go:514] duration metric: took 6m8.685068561s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner default-storageclass volcano nvidia-device-plugin storage-provisioner registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots gcp-auth csi-hostpath-driver ingress]
	I1002 06:43:23.182578  813918 start.go:246] waiting for cluster config update ...
	I1002 06:43:23.182605  813918 start.go:255] writing updated cluster config ...
	I1002 06:43:23.182910  813918 ssh_runner.go:195] Run: rm -f paused
	I1002 06:43:23.186967  813918 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:43:23.191359  813918 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s68lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.195909  813918 pod_ready.go:94] pod "coredns-66bc5c9577-s68lt" is "Ready"
	I1002 06:43:23.195939  813918 pod_ready.go:86] duration metric: took 4.553514ms for pod "coredns-66bc5c9577-s68lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.198221  813918 pod_ready.go:83] waiting for pod "etcd-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.202513  813918 pod_ready.go:94] pod "etcd-addons-110926" is "Ready"
	I1002 06:43:23.202537  813918 pod_ready.go:86] duration metric: took 4.291712ms for pod "etcd-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.204756  813918 pod_ready.go:83] waiting for pod "kube-apiserver-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.208864  813918 pod_ready.go:94] pod "kube-apiserver-addons-110926" is "Ready"
	I1002 06:43:23.208890  813918 pod_ready.go:86] duration metric: took 4.040561ms for pod "kube-apiserver-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.211197  813918 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.591502  813918 pod_ready.go:94] pod "kube-controller-manager-addons-110926" is "Ready"
	I1002 06:43:23.591528  813918 pod_ready.go:86] duration metric: took 380.304031ms for pod "kube-controller-manager-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.792134  813918 pod_ready.go:83] waiting for pod "kube-proxy-4zvzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.192193  813918 pod_ready.go:94] pod "kube-proxy-4zvzf" is "Ready"
	I1002 06:43:24.192225  813918 pod_ready.go:86] duration metric: took 400.063711ms for pod "kube-proxy-4zvzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.391575  813918 pod_ready.go:83] waiting for pod "kube-scheduler-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.791416  813918 pod_ready.go:94] pod "kube-scheduler-addons-110926" is "Ready"
	I1002 06:43:24.791440  813918 pod_ready.go:86] duration metric: took 399.838153ms for pod "kube-scheduler-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.791453  813918 pod_ready.go:40] duration metric: took 1.604452407s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:43:24.848923  813918 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 06:43:24.852286  813918 out.go:179] * Done! kubectl is now configured to use "addons-110926" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	f006bdfdbfa9c       1611cd07b61d5       6 minutes ago       Running             busybox                                  0                   396f52d70bdd3       busybox                                    default
	8db7a8fd91b3a       bc6bf68f85c70       8 minutes ago       Running             registry                                 0                   2574946f7674b       registry-66898fdd98-926mp                  kube-system
	5f7d9891cc455       5ed383cb88c34       23 minutes ago      Running             controller                               0                   6c717320771c8       ingress-nginx-controller-9cc49f96f-srz99   ingress-nginx
	0308d38377e11       ee6d597e62dc8       23 minutes ago      Running             csi-snapshotter                          0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	25fa4fdbd3104       642ded511e141       23 minutes ago      Running             csi-provisioner                          0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	fc252b8568f42       922312104da8a       23 minutes ago      Running             liveness-probe                           0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	335d72204c3f1       08f6b2990811a       23 minutes ago      Running             hostpath                                 0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	5f68f17265ee3       deda3ad36c19b       23 minutes ago      Running             gadget                                   0                   4c1a07ae3ab5b       gadget-5sxf6                               gadget
	0e5a160912072       0107d56dbc0be       23 minutes ago      Running             node-driver-registrar                    0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	739d12f7cb55c       c67c707f59d87       23 minutes ago      Exited              patch                                    0                   bb748c608a5b6       ingress-nginx-admission-patch-bq878        ingress-nginx
	591026b1dba39       c67c707f59d87       23 minutes ago      Exited              create                                   0                   cb5a57455ef86       ingress-nginx-admission-create-lw8gl       ingress-nginx
	54cf7611bdf67       bc6c1e09a843d       23 minutes ago      Running             metrics-server                           0                   7dd5efc48ed3c       metrics-server-85b7d694d7-fg8z6            kube-system
	f6cb9c538a386       4d1e5c3e97420       23 minutes ago      Running             volume-snapshot-controller               0                   67891e8bc00da       snapshot-controller-7d9fbc56b8-xwmkw       kube-system
	0c9bf13466bdb       9a80d518f102c       23 minutes ago      Running             csi-attacher                             0                   292886057d9ec       csi-hostpath-attacher-0                    kube-system
	5ad865f2d99af       7b85e0fbfd435       23 minutes ago      Running             registry-proxy                           0                   f2c6d58f83a8d       registry-proxy-bqxnl                       kube-system
	2b58aa20e457e       4d1e5c3e97420       23 minutes ago      Running             volume-snapshot-controller               0                   2f3e4307f0508       snapshot-controller-7d9fbc56b8-69zvz       kube-system
	01c56e6095ea5       487fa743e1e22       24 minutes ago      Running             csi-resizer                              0                   507c852501681       csi-hostpath-resizer-0                     kube-system
	9ba807329b10c       1461903ec4fe9       24 minutes ago      Running             csi-external-health-monitor-controller   0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	4829c9264d5b3       ba04bb24b9575       24 minutes ago      Running             storage-provisioner                      0                   cd62db6aa4ca0       storage-provisioner                        kube-system
	d607380a0ea95       138784d87c9c5       24 minutes ago      Running             coredns                                  0                   97bcb21e01196       coredns-66bc5c9577-s68lt                   kube-system
	001c4797204fc       b1a8c6f707935       25 minutes ago      Running             kindnet-cni                              0                   a8dbd581dae29       kindnet-zb4h8                              kube-system
	205ba78bdcdf4       05baa95f5142d       25 minutes ago      Running             kube-proxy                               0                   1c95f15f187e7       kube-proxy-4zvzf                           kube-system
	7d5d1641aee07       43911e833d64d       25 minutes ago      Running             kube-apiserver                           0                   111e5d5f57119       kube-apiserver-addons-110926               kube-system
	b56ea6dbe0e21       b5f57ec6b9867       25 minutes ago      Running             kube-scheduler                           0                   740338c713381       kube-scheduler-addons-110926               kube-system
	dd74ed9d21ed1       7eb2c6ff0c5a7       25 minutes ago      Running             kube-controller-manager                  0                   408527a4c051e       kube-controller-manager-addons-110926      kube-system
	8be3089b4391b       a1894772a478e       25 minutes ago      Running             etcd                                     0                   f832da367e6b5       etcd-addons-110926                         kube-system
	
	
	==> containerd <==
	Oct 02 07:01:59 addons-110926 containerd[753]: time="2025-10-02T07:01:59.575587321Z" level=info msg="PullImage \"ghcr.io/headlamp-k8s/headlamp:v0.35.0@sha256:cdbeb1dff093990ea7f3f58456bdf32dc4a163c9dc76409f2efaa036f8d86713\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.714959309Z" level=info msg="StopPodSandbox for \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.722544449Z" level=info msg="TearDown network for sandbox \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\" successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.722591184Z" level=info msg="StopPodSandbox for \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\" returns successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.723133634Z" level=info msg="RemovePodSandbox for \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.723271024Z" level=info msg="Forcibly stopping sandbox \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.730661700Z" level=info msg="TearDown network for sandbox \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\" successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.735295739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.735399539Z" level=info msg="RemovePodSandbox \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\" returns successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.735960622Z" level=info msg="StopPodSandbox for \"28bd5150ab50be93f845531405dcf348fa7bf04a65cba525c1912f98139902ed\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.743889415Z" level=info msg="TearDown network for sandbox \"28bd5150ab50be93f845531405dcf348fa7bf04a65cba525c1912f98139902ed\" successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.743935437Z" level=info msg="StopPodSandbox for \"28bd5150ab50be93f845531405dcf348fa7bf04a65cba525c1912f98139902ed\" returns successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.744617189Z" level=info msg="RemovePodSandbox for \"28bd5150ab50be93f845531405dcf348fa7bf04a65cba525c1912f98139902ed\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.744658985Z" level=info msg="Forcibly stopping sandbox \"28bd5150ab50be93f845531405dcf348fa7bf04a65cba525c1912f98139902ed\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.752440139Z" level=info msg="TearDown network for sandbox \"28bd5150ab50be93f845531405dcf348fa7bf04a65cba525c1912f98139902ed\" successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.757094026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28bd5150ab50be93f845531405dcf348fa7bf04a65cba525c1912f98139902ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.757196857Z" level=info msg="RemovePodSandbox \"28bd5150ab50be93f845531405dcf348fa7bf04a65cba525c1912f98139902ed\" returns successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.757704527Z" level=info msg="StopPodSandbox for \"b598bb55c93b61b8396e5bd602ec4f09e421ea306571930cfd5e93b64111f193\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.765171582Z" level=info msg="TearDown network for sandbox \"b598bb55c93b61b8396e5bd602ec4f09e421ea306571930cfd5e93b64111f193\" successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.765325702Z" level=info msg="StopPodSandbox for \"b598bb55c93b61b8396e5bd602ec4f09e421ea306571930cfd5e93b64111f193\" returns successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.765989470Z" level=info msg="RemovePodSandbox for \"b598bb55c93b61b8396e5bd602ec4f09e421ea306571930cfd5e93b64111f193\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.766124957Z" level=info msg="Forcibly stopping sandbox \"b598bb55c93b61b8396e5bd602ec4f09e421ea306571930cfd5e93b64111f193\""
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.773803452Z" level=info msg="TearDown network for sandbox \"b598bb55c93b61b8396e5bd602ec4f09e421ea306571930cfd5e93b64111f193\" successfully"
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.778715341Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b598bb55c93b61b8396e5bd602ec4f09e421ea306571930cfd5e93b64111f193\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 02 07:02:10 addons-110926 containerd[753]: time="2025-10-02T07:02:10.778812216Z" level=info msg="RemovePodSandbox \"b598bb55c93b61b8396e5bd602ec4f09e421ea306571930cfd5e93b64111f193\" returns successfully"
	
	
	==> coredns [d607380a0ea95122f5da6e25cf2168aa3ea1ff11f2efdf89f4a8c2d0e5150d23] <==
	[INFO] 10.244.0.10:50787 - 30802 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001101513s
	[INFO] 10.244.0.10:50787 - 65077 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000141051s
	[INFO] 10.244.0.10:50787 - 27995 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000170818s
	[INFO] 10.244.0.10:57105 - 40777 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000193627s
	[INFO] 10.244.0.10:57105 - 44741 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000111316s
	[INFO] 10.244.0.10:57105 - 25201 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000093429s
	[INFO] 10.244.0.10:57105 - 38571 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084034s
	[INFO] 10.244.0.10:57105 - 24208 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076166s
	[INFO] 10.244.0.10:57105 - 56789 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000139811s
	[INFO] 10.244.0.10:57105 - 46307 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001361429s
	[INFO] 10.244.0.10:57105 - 10819 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.000882336s
	[INFO] 10.244.0.10:57105 - 62476 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000092289s
	[INFO] 10.244.0.10:57105 - 29096 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.0000767s
	[INFO] 10.244.0.10:43890 - 1641 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123016s
	[INFO] 10.244.0.10:43890 - 1411 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136259s
	[INFO] 10.244.0.10:42249 - 55738 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014663s
	[INFO] 10.244.0.10:42249 - 56025 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119479s
	[INFO] 10.244.0.10:58600 - 45308 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118355s
	[INFO] 10.244.0.10:58600 - 45497 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012044s
	[INFO] 10.244.0.10:58816 - 38609 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0013196s
	[INFO] 10.244.0.10:58816 - 38806 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001169622s
	[INFO] 10.244.0.10:53569 - 36791 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135397s
	[INFO] 10.244.0.10:53569 - 36387 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116156s
	[INFO] 10.244.0.26:45800 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000375054s
	[INFO] 10.244.0.26:44881 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100551s
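The runs of NXDOMAIN answers above are expected: pods resolve short-ish names through the cluster's resolv.conf search path (typically with `ndots:5`), so a lookup for `registry.kube-system.svc.cluster.local` is first tried with each search domain appended, and only the final bare query returns NOERROR. A minimal sketch of that expansion, assuming the search list implied by the log (the `us-east-2.compute.internal` entry comes from the host):

```python
def expand_query(name, search_domains, ndots=5):
    """Mimic resolv.conf search behavior: if the name has fewer than
    `ndots` dots (and is not absolute), try it suffixed with each
    search domain first, then as-is."""
    if name.endswith("."):
        return [name]  # absolute name: no expansion
    if name.count(".") < ndots:
        return [f"{name}.{d}" for d in search_domains] + [name]
    return [name] + [f"{name}.{d}" for d in search_domains]

# Hypothetical search list reconstructed from the coredns queries above.
search = [
    "kube-system.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "us-east-2.compute.internal",
]
for candidate in expand_query("registry.kube-system.svc.cluster.local", search):
    print(candidate)
```

The query name has only 4 dots, which is below `ndots:5`, so all four suffixed candidates (the NXDOMAIN entries in the log) are attempted before the bare name succeeds.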
	
	
	==> describe nodes <==
	Name:               addons-110926
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-110926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-110926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_37_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-110926
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-110926"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:37:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-110926
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:02:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:00:28 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:00:28 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:00:28 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:00:28 +0000   Thu, 02 Oct 2025 06:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-110926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 852f460d42254382a140bbeecb584248
	  System UUID:                c6ea63c0-97bd-4894-b738-fecc8ba127ac
	  Boot ID:                    7d897d56-c217-4cfc-926c-91f9be002777
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gadget                      gadget-5sxf6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  headlamp                    headlamp-85f8f8dc54-cdxjc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-srz99    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         24m
	  kube-system                 coredns-66bc5c9577-s68lt                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     25m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 csi-hostpathplugin-mg6q4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-addons-110926                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         25m
	  kube-system                 kindnet-zb4h8                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      25m
	  kube-system                 kube-apiserver-addons-110926                250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-addons-110926       200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-4zvzf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-addons-110926                100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 metrics-server-85b7d694d7-fg8z6             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         24m
	  kube-system                 registry-66898fdd98-926mp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 registry-creds-764b6fb674-s7sx5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 registry-proxy-bqxnl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 snapshot-controller-7d9fbc56b8-69zvz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 snapshot-controller-7d9fbc56b8-xwmkw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25m                kube-proxy       
	  Normal   Starting                 25m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 25m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node addons-110926 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node addons-110926 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node addons-110926 status is now: NodeHasSufficientMemory
	  Normal   Starting                 25m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 25m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  25m                kubelet          Node addons-110926 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25m                kubelet          Node addons-110926 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     25m                kubelet          Node addons-110926 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           25m                node-controller  Node addons-110926 event: Registered Node addons-110926 in Controller
	  Normal   NodeReady                24m                kubelet          Node addons-110926 status is now: NodeReady
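The percentages in the "Allocated resources" block above are computed against the node's Allocatable capacity (2 CPUs, 8022296Ki memory). A quick sketch reproducing the 52% CPU and 6% memory request figures from the values in this report:

```python
# Node Allocatable, taken from the describe-nodes output above.
allocatable_cpu_m = 2 * 1000      # 2 CPUs expressed in millicores
allocatable_mem_ki = 8022296      # memory in Ki

# Summed pod requests from the "Allocated resources" block.
cpu_requests_m = 1050             # millicores
mem_requests_mi = 510             # Mi

# kubectl truncates toward zero when printing these percentages.
cpu_pct = cpu_requests_m * 100 // allocatable_cpu_m
mem_pct = (mem_requests_mi * 1024) * 100 // allocatable_mem_ki

print(f"cpu {cpu_requests_m}m ({cpu_pct}%)")     # cpu 1050m (52%)
print(f"memory {mem_requests_mi}Mi ({mem_pct}%)")  # memory 510Mi (6%)
```

Note the CPU requests alone (1050m of 2000m) leave under half the node for bursts, which is consistent with the slow pod startups seen in the Volcano test timeout.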
	
	
	==> dmesg <==
	[Oct 2 05:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 2 06:35] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8be3089b4391b68797b9ff88ff2b0c3043e3281ca30bcb48a82169b26fb4081d] <==
	{"level":"warn","ts":"2025-10-02T06:37:44.478170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.495681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.527887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.548456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.563248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.624874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.689649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.719544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.736879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.755892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.770836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.790478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:39:30.364216Z","caller":"traceutil/trace.go:172","msg":"trace[134372730] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"123.100543ms","start":"2025-10-02T06:39:30.241102Z","end":"2025-10-02T06:39:30.364202Z","steps":["trace[134372730] 'process raft request'  (duration: 122.981302ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T06:47:04.880918Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1925}
	{"level":"info","ts":"2025-10-02T06:47:04.920117Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1925,"took":"38.610713ms","hash":2612370864,"current-db-size-bytes":8695808,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5120000,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2025-10-02T06:47:04.920180Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2612370864,"revision":1925,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T06:52:04.887964Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2405}
	{"level":"info","ts":"2025-10-02T06:52:04.907361Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2405,"took":"18.449885ms","hash":1927945438,"current-db-size-bytes":8695808,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":3727360,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-10-02T06:52:04.907428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1927945438,"revision":2405,"compact-revision":1925}
	{"level":"info","ts":"2025-10-02T06:57:04.895109Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2864}
	{"level":"info","ts":"2025-10-02T06:57:04.926419Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2864,"took":"30.706949ms","hash":805286141,"current-db-size-bytes":9138176,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":5545984,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-02T06:57:04.926487Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":805286141,"revision":2864,"compact-revision":2405}
	{"level":"info","ts":"2025-10-02T07:02:04.901785Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":3652}
	{"level":"info","ts":"2025-10-02T07:02:04.924487Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":3652,"took":"21.535369ms","hash":3971856220,"current-db-size-bytes":9138176,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4923392,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2025-10-02T07:02:04.924535Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3971856220,"revision":3652,"compact-revision":2864}
	
	
	==> kernel <==
	 07:02:17 up  6:44,  0 user,  load average: 0.19, 0.62, 1.18
	Linux addons-110926 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [001c4797204fc8489af667e5dc44dc2de85bde6fbbb94189af8eaa6e51b826b8] <==
	I1002 07:00:16.722968       1 main.go:301] handling current node
	I1002 07:00:26.723365       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:26.723401       1 main.go:301] handling current node
	I1002 07:00:36.725391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:36.725429       1 main.go:301] handling current node
	I1002 07:00:46.731290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:46.731393       1 main.go:301] handling current node
	I1002 07:00:56.723222       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:56.723258       1 main.go:301] handling current node
	I1002 07:01:06.722803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:06.722842       1 main.go:301] handling current node
	I1002 07:01:16.723809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:16.723848       1 main.go:301] handling current node
	I1002 07:01:26.725153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:26.725189       1 main.go:301] handling current node
	I1002 07:01:36.726013       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:36.726049       1 main.go:301] handling current node
	I1002 07:01:46.728856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:46.728892       1 main.go:301] handling current node
	I1002 07:01:56.726104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:56.726142       1 main.go:301] handling current node
	I1002 07:02:06.726315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:02:06.726523       1 main.go:301] handling current node
	I1002 07:02:16.725987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:02:16.726019       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7d5d1641aee0712674398096e96919d3b125a32fedea7425f03406a609a25f01] <==
	I1002 06:55:13.513127       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	W1002 06:55:13.858494       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": service "volcano-admission-service" not found
	I1002 06:55:13.893899       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1002 06:55:14.147786       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 06:55:14.185061       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 06:55:14.209779       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1002 06:55:14.267834       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1002 06:55:14.283374       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 06:55:14.505707       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 06:55:14.840606       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1002 06:55:14.931919       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 06:55:14.932179       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1002 06:55:15.053984       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	I1002 06:55:15.149713       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 06:55:15.287278       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1002 06:55:15.371538       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1002 06:55:15.395965       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1002 06:55:15.429512       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1002 06:55:16.150162       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1002 06:55:16.557861       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1002 06:55:34.021110       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57690: use of closed network connection
	E1002 06:55:34.271460       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57730: use of closed network connection
	E1002 06:55:34.454019       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57748: use of closed network connection
	I1002 06:57:07.469062       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:01:59.058631       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.126.237"}
	
	
	==> kube-controller-manager [dd74ed9d21ed14fc6778ffc7add04a70910ec955742f31d4442b2c07c8ea86db] <==
	E1002 07:01:28.340927       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:01:28.342755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:01:29.492591       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:01:35.147089       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:01:35.148576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:01:36.734144       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:01:36.735364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:01:37.683636       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:01:37.684852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:01:44.492680       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:01:50.958729       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:01:50.959789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:01:59.108140       1 replica_set.go:587] "Unhandled Error" err="sync \"headlamp/headlamp-85f8f8dc54\" failed with pods \"headlamp-85f8f8dc54-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found" logger="UnhandledError"
	E1002 07:01:59.493006       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:02:06.527882       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:02:06.529235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:02:07.839212       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:02:07.840512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:02:08.245787       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:02:08.247083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:02:09.638723       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:02:09.640115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:02:14.493560       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1002 07:02:15.009771       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:02:15.011367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [205ba78bdcdf484d8af0d0330d3a99ba39bdc20efa19428202c6c4cd7dfd9d33] <==
	I1002 06:37:16.426570       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:37:16.498503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:37:16.599091       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:37:16.599151       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:37:16.599225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:37:16.664219       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:37:16.664277       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:37:16.670034       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:37:16.670375       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:37:16.670399       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:37:16.671951       1 config.go:200] "Starting service config controller"
	I1002 06:37:16.671975       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:37:16.671996       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:37:16.672007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:37:16.672023       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:37:16.672032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:37:16.676259       1 config.go:309] "Starting node config controller"
	I1002 06:37:16.676302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:37:16.676311       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:37:16.772116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:37:16.772157       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:37:16.772192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b56ea6dbe0e218561ee35e4169c6c63e3160ecf828f68ed8b40ef0285f668b5e] <==
	I1002 06:37:08.294088       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:37:08.297839       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 06:37:08.298569       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:37:08.301736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 06:37:08.302088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:37:08.302287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1002 06:37:08.298598       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 06:37:08.303874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:37:08.304074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:37:08.304269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:37:08.304471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:37:08.308085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 06:37:08.317169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:37:08.317571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:37:08.317827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:37:08.317882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:37:08.317917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:37:08.317998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:37:08.318060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:37:08.325459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:37:08.325531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:37:08.325571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:37:08.325620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:37:08.325676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1002 06:37:09.602936       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:01:33 addons-110926 kubelet[1456]: E1002 07:01:33.243612    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.275178    1456 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mds4d\" (UniqueName: \"kubernetes.io/projected/1e24673f-4e5a-493c-86b6-4dd1ec08fae1-kube-api-access-mds4d\") pod \"1e24673f-4e5a-493c-86b6-4dd1ec08fae1\" (UID: \"1e24673f-4e5a-493c-86b6-4dd1ec08fae1\") "
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.275240    1456 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e24673f-4e5a-493c-86b6-4dd1ec08fae1-config-volume\") pod \"1e24673f-4e5a-493c-86b6-4dd1ec08fae1\" (UID: \"1e24673f-4e5a-493c-86b6-4dd1ec08fae1\") "
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.284915    1456 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e24673f-4e5a-493c-86b6-4dd1ec08fae1-kube-api-access-mds4d" (OuterVolumeSpecName: "kube-api-access-mds4d") pod "1e24673f-4e5a-493c-86b6-4dd1ec08fae1" (UID: "1e24673f-4e5a-493c-86b6-4dd1ec08fae1"). InnerVolumeSpecName "kube-api-access-mds4d". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.287504    1456 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e24673f-4e5a-493c-86b6-4dd1ec08fae1-config-volume" (OuterVolumeSpecName: "config-volume") pod "1e24673f-4e5a-493c-86b6-4dd1ec08fae1" (UID: "1e24673f-4e5a-493c-86b6-4dd1ec08fae1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.292423    1456 scope.go:117] "RemoveContainer" containerID="99c1411ad7ad70ac07b00c2dc4839c0d5ace6920239fc54336eba254e81f86b4"
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.305943    1456 scope.go:117] "RemoveContainer" containerID="99c1411ad7ad70ac07b00c2dc4839c0d5ace6920239fc54336eba254e81f86b4"
	Oct 02 07:01:39 addons-110926 kubelet[1456]: E1002 07:01:39.306790    1456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99c1411ad7ad70ac07b00c2dc4839c0d5ace6920239fc54336eba254e81f86b4\": not found" containerID="99c1411ad7ad70ac07b00c2dc4839c0d5ace6920239fc54336eba254e81f86b4"
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.306961    1456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99c1411ad7ad70ac07b00c2dc4839c0d5ace6920239fc54336eba254e81f86b4"} err="failed to get container status \"99c1411ad7ad70ac07b00c2dc4839c0d5ace6920239fc54336eba254e81f86b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"99c1411ad7ad70ac07b00c2dc4839c0d5ace6920239fc54336eba254e81f86b4\": not found"
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.377093    1456 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mds4d\" (UniqueName: \"kubernetes.io/projected/1e24673f-4e5a-493c-86b6-4dd1ec08fae1-kube-api-access-mds4d\") on node \"addons-110926\" DevicePath \"\""
	Oct 02 07:01:39 addons-110926 kubelet[1456]: I1002 07:01:39.377132    1456 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e24673f-4e5a-493c-86b6-4dd1ec08fae1-config-volume\") on node \"addons-110926\" DevicePath \"\""
	Oct 02 07:01:40 addons-110926 kubelet[1456]: E1002 07:01:40.245482    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:01:40 addons-110926 kubelet[1456]: I1002 07:01:40.247018    1456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e24673f-4e5a-493c-86b6-4dd1ec08fae1" path="/var/lib/kubelet/pods/1e24673f-4e5a-493c-86b6-4dd1ec08fae1/volumes"
	Oct 02 07:01:47 addons-110926 kubelet[1456]: E1002 07:01:47.243832    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:01:54 addons-110926 kubelet[1456]: E1002 07:01:54.244904    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:01:58 addons-110926 kubelet[1456]: I1002 07:01:58.366919    1456 scope.go:117] "RemoveContainer" containerID="e6021edb430f3dfdb5e0aeda7fc40f893e90da9657abfe03769d59361de51d87"
	Oct 02 07:01:58 addons-110926 kubelet[1456]: I1002 07:01:58.378612    1456 scope.go:117] "RemoveContainer" containerID="e6021edb430f3dfdb5e0aeda7fc40f893e90da9657abfe03769d59361de51d87"
	Oct 02 07:01:58 addons-110926 kubelet[1456]: E1002 07:01:58.379633    1456 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e6021edb430f3dfdb5e0aeda7fc40f893e90da9657abfe03769d59361de51d87\": not found" containerID="e6021edb430f3dfdb5e0aeda7fc40f893e90da9657abfe03769d59361de51d87"
	Oct 02 07:01:58 addons-110926 kubelet[1456]: I1002 07:01:58.380566    1456 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e6021edb430f3dfdb5e0aeda7fc40f893e90da9657abfe03769d59361de51d87"} err="failed to get container status \"e6021edb430f3dfdb5e0aeda7fc40f893e90da9657abfe03769d59361de51d87\": rpc error: code = NotFound desc = an error occurred when try to find container \"e6021edb430f3dfdb5e0aeda7fc40f893e90da9657abfe03769d59361de51d87\": not found"
	Oct 02 07:01:58 addons-110926 kubelet[1456]: I1002 07:01:58.428920    1456 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-747j8\" (UniqueName: \"kubernetes.io/projected/d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4-kube-api-access-747j8\") pod \"d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4\" (UID: \"d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4\") "
	Oct 02 07:01:58 addons-110926 kubelet[1456]: I1002 07:01:58.435105    1456 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4-kube-api-access-747j8" (OuterVolumeSpecName: "kube-api-access-747j8") pod "d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4" (UID: "d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4"). InnerVolumeSpecName "kube-api-access-747j8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 02 07:01:58 addons-110926 kubelet[1456]: I1002 07:01:58.530138    1456 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-747j8\" (UniqueName: \"kubernetes.io/projected/d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4-kube-api-access-747j8\") on node \"addons-110926\" DevicePath \"\""
	Oct 02 07:01:59 addons-110926 kubelet[1456]: I1002 07:01:59.235644    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92r7\" (UniqueName: \"kubernetes.io/projected/0c7d368f-9ebb-4f12-a286-d31aaa2e3a2d-kube-api-access-h92r7\") pod \"headlamp-85f8f8dc54-cdxjc\" (UID: \"0c7d368f-9ebb-4f12-a286-d31aaa2e3a2d\") " pod="headlamp/headlamp-85f8f8dc54-cdxjc"
	Oct 02 07:02:00 addons-110926 kubelet[1456]: I1002 07:02:00.273423    1456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4" path="/var/lib/kubelet/pods/d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4/volumes"
	Oct 02 07:02:09 addons-110926 kubelet[1456]: E1002 07:02:09.245210    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	
	
	==> storage-provisioner [4829c9264d5b3ae1fc764ede230e33d7252374c2ec8cd6385777a58debef5783] <==
	W1002 07:01:53.123609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:55.126661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:55.132319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:57.135589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:57.142519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:59.173681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:59.185901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:01.190334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:01.198787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:03.202231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:03.206801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:05.210462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:05.215060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:07.221712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:07.228244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:09.231358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:09.235726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:11.239317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:11.243790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:13.247257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:13.251099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:15.254315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:15.258337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:17.262572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:02:17.267276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
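The storage-provisioner emits the warnings above roughly every two seconds (see the paired timestamps) because it still reads `v1` Endpoints objects, likely for its leader-election lock. The API server's suggested replacement is `discovery.k8s.io/v1` EndpointSlice; for reference, a minimal EndpointSlice object looks like the fragment below (names and addresses here are illustrative, not taken from this cluster):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: example-svc-abc12            # illustrative name
  namespace: kube-system
  labels:
    kubernetes.io/service-name: example-svc   # ties the slice to its Service
addressType: IPv4
endpoints:
  - addresses:
      - "10.244.0.5"
ports:
  - name: http
    port: 80
    protocol: TCP
```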
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-110926 -n addons-110926
helpers_test.go:269: (dbg) Run:  kubectl --context addons-110926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path headlamp-85f8f8dc54-cdxjc ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-110926 describe pod task-pv-pod test-local-path headlamp-85f8f8dc54-cdxjc ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-110926 describe pod task-pv-pod test-local-path headlamp-85f8f8dc54-cdxjc ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5: exit status 1 (108.580713ms)

                                                
                                                
-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-110926/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:56:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mn5jj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mn5jj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-110926
	  Normal   Pulling    3m6s (x5 over 6m3s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m6s (x5 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m6s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    58s (x21 over 6m2s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     58s (x21 over 6m2s)  kubelet            Error: ImagePullBackOff
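The event timeline above (`Pulling x5` and `BackOff x21` over roughly six minutes) is kubelet's capped exponential image-pull backoff at work. A minimal sketch of that policy, assuming the commonly cited defaults of a 10s initial delay, doubling per failure, capped at 5 minutes (the exact values are kubelet configuration, not taken from this log):

```python
def backoff_schedule(initial=10, factor=2, cap=300, total=360):
    """Yield the retry delays of a capped exponential backoff.

    Models kubelet-style ImagePullBackOff: the delay doubles after each
    failed pull until it reaches the cap, then stays flat. `total` bounds
    the observation window in seconds.
    """
    delay, elapsed, schedule = initial, 0, []
    while elapsed + delay <= total:
        schedule.append(delay)
        elapsed += delay
        delay = min(delay * factor, cap)
    return schedule

print(backoff_schedule())  # -> [10, 20, 40, 80, 160]
```

Five retries fit inside the 6-minute window, which matches the "Pulling ... (x5 over 6m3s)" count in the events above.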
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-km9d9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-km9d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-85f8f8dc54-cdxjc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-lw8gl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bq878" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-s7sx5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-110926 describe pod task-pv-pod test-local-path headlamp-85f8f8dc54-cdxjc ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5: exit status 1
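The post-mortem step first collects pods with `--field-selector=status.phase!=Running` and then describes each one; pods garbage-collected between the two calls explain the `NotFound` errors in the stderr block above. The selection itself is just a phase filter; a sketch over plain dicts (the pod data is illustrative, not from this cluster):

```python
def non_running(pods):
    """Return names of pods whose status.phase is not Running,
    mimicking kubectl's --field-selector=status.phase!=Running
    (which also matches Succeeded and Failed pods)."""
    return [p["metadata"]["name"] for p in pods
            if p["status"]["phase"] != "Running"]

pods = [
    {"metadata": {"name": "task-pv-pod"}, "status": {"phase": "Pending"}},
    {"metadata": {"name": "coredns-abc"}, "status": {"phase": "Running"}},
]
print(non_running(pods))  # -> ['task-pv-pod']
```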
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.923498658s)
--- FAIL: TestAddons/parallel/CSI (391.27s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (346.01s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-110926 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-110926 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-110926 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-110926
helpers_test.go:243: (dbg) docker inspect addons-110926:

-- stdout --
	[
	    {
	        "Id": "e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d",
	        "Created": "2025-10-02T06:36:47.077600034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 814321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T06:36:47.138474038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/hostname",
	        "HostsPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/hosts",
	        "LogPath": "/var/lib/docker/containers/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d/e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d-json.log",
	        "Name": "/addons-110926",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-110926:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-110926",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e88a06110ea17d178fc2cb0aaa8c6c49c1fa4ac62b6d5cc23fc71a81526b4c4d",
	                "LowerDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d-init/diff:/var/lib/docker/overlay2/f1b2a52495d4d5d1e70fc487fac677b5080c5f1320773666a738aa42def3e2df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4a24fb873d2f39ea94619db41de95d7146c12a8d8bfd43b4862fb05b858ff48d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-110926",
	                "Source": "/var/lib/docker/volumes/addons-110926/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-110926",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-110926",
	                "name.minikube.sigs.k8s.io": "addons-110926",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e03dfd9e44981225a70f6640c6b12a48805938cfdd54b566df7bddffa824b2d",
	            "SandboxKey": "/var/run/docker/netns/6e03dfd9e449",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-110926": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:3c:a1:2d:84:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c2d471fc3c60a7f5a83ca737cf0a22c0c0076227d91a7e348867826280521af7",
	                    "EndpointID": "885b90e051ad80837eb5c6d3c161821bbf8a3c111f24b170e0bc233d0690c448",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-110926",
	                        "e88a06110ea1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-110926 -n addons-110926
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 logs -n 25: (1.334579314s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                      ARGS                                                                                                                                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-492765 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                          │ download-only-492765   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-492765                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-492765   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ -o=json --download-only -p download-only-547243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                          │ download-only-547243   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ minikube               │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-547243                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-547243   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-492765                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-492765   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-547243                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-547243   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ --download-only -p download-docker-533728 --alsologtostderr --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-533728 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ -p download-docker-533728                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-docker-533728 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ --download-only -p binary-mirror-704812 --alsologtostderr --binary-mirror http://127.0.0.1:37961 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-704812   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ -p binary-mirror-704812                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ binary-mirror-704812   │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ addons  │ enable dashboard -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ addons  │ disable dashboard -p addons-110926                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ start   │ -p addons-110926 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:43 UTC │
	│ addons  │ addons-110926 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ ip      │ addons-110926 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons  │ addons-110926 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-110926          │ jenkins │ v1.37.0 │ 02 Oct 25 06:56 UTC │ 02 Oct 25 06:56 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:36:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:36:21.580334  813918 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:36:21.580482  813918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:21.580492  813918 out.go:374] Setting ErrFile to fd 2...
	I1002 06:36:21.580497  813918 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:21.580834  813918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 06:36:21.581311  813918 out.go:368] Setting JSON to false
	I1002 06:36:21.582265  813918 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22731,"bootTime":1759364251,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 06:36:21.582336  813918 start.go:140] virtualization:  
	I1002 06:36:21.585831  813918 out.go:179] * [addons-110926] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:36:21.589067  813918 notify.go:220] Checking for updates...
	I1002 06:36:21.589658  813918 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 06:36:21.592579  813918 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:36:21.595634  813918 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:36:21.598400  813918 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 06:36:21.601243  813918 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 06:36:21.604214  813918 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 06:36:21.607495  813918 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:36:21.629855  813918 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:36:21.629989  813918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:21.693096  813918 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:36:21.683464105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:21.693212  813918 docker.go:318] overlay module found
	I1002 06:36:21.698158  813918 out.go:179] * Using the docker driver based on user configuration
	I1002 06:36:21.700959  813918 start.go:304] selected driver: docker
	I1002 06:36:21.700986  813918 start.go:924] validating driver "docker" against <nil>
	I1002 06:36:21.701000  813918 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 06:36:21.701711  813918 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:21.758634  813918 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-02 06:36:21.749346343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:21.758811  813918 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:36:21.759085  813918 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:36:21.762043  813918 out.go:179] * Using Docker driver with root privileges
	I1002 06:36:21.764916  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:36:21.764987  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:21.765005  813918 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:36:21.765078  813918 start.go:348] cluster config:
	{Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:21.768148  813918 out.go:179] * Starting "addons-110926" primary control-plane node in "addons-110926" cluster
	I1002 06:36:21.771007  813918 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 06:36:21.773962  813918 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:36:21.776817  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:21.776869  813918 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 06:36:21.776883  813918 cache.go:58] Caching tarball of preloaded images
	I1002 06:36:21.776920  813918 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:36:21.776978  813918 preload.go:233] Found /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 06:36:21.776988  813918 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 06:36:21.777328  813918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json ...
	I1002 06:36:21.777357  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json: {Name:mk2f8f9458f5bc5a3d522cc7bc03c497073f8f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:21.792651  813918 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:36:21.792805  813918 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:36:21.792830  813918 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 06:36:21.792839  813918 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 06:36:21.792848  813918 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 06:36:21.792856  813918 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 06:36:39.840628  813918 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 06:36:39.840677  813918 cache.go:232] Successfully downloaded all kic artifacts
	I1002 06:36:39.840706  813918 start.go:360] acquireMachinesLock for addons-110926: {Name:mk5b3ba2eb8943c76c6ef867a9f0efe000290e8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 06:36:39.840853  813918 start.go:364] duration metric: took 124.262µs to acquireMachinesLock for "addons-110926"
	I1002 06:36:39.840884  813918 start.go:93] Provisioning new machine with config: &{Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 06:36:39.840959  813918 start.go:125] createHost starting for "" (driver="docker")
	I1002 06:36:39.844345  813918 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 06:36:39.844567  813918 start.go:159] libmachine.API.Create for "addons-110926" (driver="docker")
	I1002 06:36:39.844615  813918 client.go:168] LocalClient.Create starting
	I1002 06:36:39.844744  813918 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem
	I1002 06:36:40.158293  813918 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem
	I1002 06:36:40.423695  813918 cli_runner.go:164] Run: docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 06:36:40.439045  813918 cli_runner.go:211] docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 06:36:40.439144  813918 network_create.go:284] running [docker network inspect addons-110926] to gather additional debugging logs...
	I1002 06:36:40.439166  813918 cli_runner.go:164] Run: docker network inspect addons-110926
	W1002 06:36:40.454853  813918 cli_runner.go:211] docker network inspect addons-110926 returned with exit code 1
	I1002 06:36:40.454885  813918 network_create.go:287] error running [docker network inspect addons-110926]: docker network inspect addons-110926: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-110926 not found
	I1002 06:36:40.454900  813918 network_create.go:289] output of [docker network inspect addons-110926]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-110926 not found
	
	** /stderr **
	I1002 06:36:40.454994  813918 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:36:40.471187  813918 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3c190}
	I1002 06:36:40.471239  813918 network_create.go:124] attempt to create docker network addons-110926 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 06:36:40.471291  813918 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-110926 addons-110926
	I1002 06:36:40.528426  813918 network_create.go:108] docker network addons-110926 192.168.49.0/24 created
	I1002 06:36:40.528461  813918 kic.go:121] calculated static IP "192.168.49.2" for the "addons-110926" container
	I1002 06:36:40.528550  813918 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 06:36:40.544507  813918 cli_runner.go:164] Run: docker volume create addons-110926 --label name.minikube.sigs.k8s.io=addons-110926 --label created_by.minikube.sigs.k8s.io=true
	I1002 06:36:40.560870  813918 oci.go:103] Successfully created a docker volume addons-110926
	I1002 06:36:40.560961  813918 cli_runner.go:164] Run: docker run --rm --name addons-110926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --entrypoint /usr/bin/test -v addons-110926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 06:36:42.684275  813918 cli_runner.go:217] Completed: docker run --rm --name addons-110926-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --entrypoint /usr/bin/test -v addons-110926:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.123276184s)
	I1002 06:36:42.684309  813918 oci.go:107] Successfully prepared a docker volume addons-110926
	I1002 06:36:42.684338  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:42.684360  813918 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 06:36:42.684441  813918 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-110926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 06:36:47.011851  813918 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-110926:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.327364513s)
	I1002 06:36:47.011897  813918 kic.go:203] duration metric: took 4.327533581s to extract preloaded images to volume ...
	W1002 06:36:47.012040  813918 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 06:36:47.012157  813918 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 06:36:47.062619  813918 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-110926 --name addons-110926 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-110926 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-110926 --network addons-110926 --ip 192.168.49.2 --volume addons-110926:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 06:36:47.379291  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Running}}
	I1002 06:36:47.400798  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.426150  813918 cli_runner.go:164] Run: docker exec addons-110926 stat /var/lib/dpkg/alternatives/iptables
	I1002 06:36:47.477926  813918 oci.go:144] the created container "addons-110926" has a running status.
	I1002 06:36:47.477953  813918 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa...
	I1002 06:36:47.781138  813918 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 06:36:47.806163  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.827180  813918 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 06:36:47.827199  813918 kic_runner.go:114] Args: [docker exec --privileged addons-110926 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 06:36:47.891791  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:36:47.911592  813918 machine.go:93] provisionDockerMachine start ...
	I1002 06:36:47.911695  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:47.930991  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:47.931327  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:47.931345  813918 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 06:36:47.931960  813918 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57194->127.0.0.1:33863: read: connection reset by peer
	I1002 06:36:51.072477  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110926
	
	I1002 06:36:51.072569  813918 ubuntu.go:182] provisioning hostname "addons-110926"
	I1002 06:36:51.072685  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.090401  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:51.090720  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:51.090740  813918 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-110926 && echo "addons-110926" | sudo tee /etc/hostname
	I1002 06:36:51.236050  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-110926
	
	I1002 06:36:51.236138  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.258063  813918 main.go:141] libmachine: Using SSH client type: native
	I1002 06:36:51.258373  813918 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33863 <nil> <nil>}
	I1002 06:36:51.258395  813918 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-110926' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-110926/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-110926' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 06:36:51.388860  813918 main.go:141] libmachine: SSH cmd err, output: <nil>: 
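The `/etc/hosts` script the provisioner just ran is idempotent: if no line already maps the hostname, it either rewrites an existing `127.0.1.1` entry or appends one. A standalone version of that logic, parameterized over the hosts file so it can be exercised on a scratch copy (the function name and POSIX `[[:space:]]` classes are this sketch's choices, not minikube's exact script):

```shell
#!/usr/bin/env bash
set -eu

# Ensure $2 is mapped in hosts file $1, preferring to reuse a 127.0.1.1 line.
update_hosts() {
  local hosts="$1" name="$2"
  if ! grep -q "[[:space:]]${name}\$" "$hosts"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
      # Rewrite the existing 127.0.1.1 entry in place.
      sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts"
    else
      # No 127.0.1.1 line yet: append a fresh mapping.
      echo "127.0.1.1 ${name}" >> "$hosts"
    fi
  fi
}
```

Calling it twice with the same name leaves exactly one entry, which is why the provisioner can re-run it safely on every boot.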
	I1002 06:36:51.388887  813918 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-811293/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-811293/.minikube}
	I1002 06:36:51.388910  813918 ubuntu.go:190] setting up certificates
	I1002 06:36:51.388920  813918 provision.go:84] configureAuth start
	I1002 06:36:51.388983  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:51.405357  813918 provision.go:143] copyHostCerts
	I1002 06:36:51.405461  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem (1123 bytes)
	I1002 06:36:51.405586  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem (1679 bytes)
	I1002 06:36:51.405650  813918 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem (1078 bytes)
	I1002 06:36:51.405711  813918 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem org=jenkins.addons-110926 san=[127.0.0.1 192.168.49.2 addons-110926 localhost minikube]
	I1002 06:36:51.612527  813918 provision.go:177] copyRemoteCerts
	I1002 06:36:51.612597  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 06:36:51.612649  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.629460  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:51.725298  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 06:36:51.743050  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 06:36:51.760643  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 06:36:51.777747  813918 provision.go:87] duration metric: took 388.803174ms to configureAuth
	I1002 06:36:51.777772  813918 ubuntu.go:206] setting minikube options for container-runtime
	I1002 06:36:51.777954  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:36:51.777961  813918 machine.go:96] duration metric: took 3.866353513s to provisionDockerMachine
	I1002 06:36:51.777968  813918 client.go:171] duration metric: took 11.933342699s to LocalClient.Create
	I1002 06:36:51.777991  813918 start.go:167] duration metric: took 11.933425856s to libmachine.API.Create "addons-110926"
	I1002 06:36:51.778000  813918 start.go:293] postStartSetup for "addons-110926" (driver="docker")
	I1002 06:36:51.778009  813918 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 06:36:51.778057  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 06:36:51.778100  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.794568  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:51.888438  813918 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 06:36:51.891559  813918 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 06:36:51.891587  813918 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 06:36:51.891598  813918 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/addons for local assets ...
	I1002 06:36:51.891662  813918 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/files for local assets ...
	I1002 06:36:51.891684  813918 start.go:296] duration metric: took 113.678581ms for postStartSetup
	I1002 06:36:51.891998  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:51.908094  813918 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/config.json ...
	I1002 06:36:51.908374  813918 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 06:36:51.908417  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:51.924432  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.017816  813918 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 06:36:52.022845  813918 start.go:128] duration metric: took 12.181870526s to createHost
	I1002 06:36:52.022873  813918 start.go:83] releasing machines lock for "addons-110926", held for 12.182006857s
	I1002 06:36:52.022950  813918 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-110926
	I1002 06:36:52.040319  813918 ssh_runner.go:195] Run: cat /version.json
	I1002 06:36:52.040381  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:52.040643  813918 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 06:36:52.040709  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:36:52.064673  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.078579  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:36:52.168362  813918 ssh_runner.go:195] Run: systemctl --version
	I1002 06:36:52.263150  813918 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 06:36:52.267928  813918 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 06:36:52.267998  813918 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 06:36:52.294529  813918 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1002 06:36:52.294574  813918 start.go:495] detecting cgroup driver to use...
	I1002 06:36:52.294607  813918 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 06:36:52.294670  813918 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 06:36:52.309592  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 06:36:52.322252  813918 docker.go:218] disabling cri-docker service (if available) ...
	I1002 06:36:52.322343  813918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 06:36:52.339306  813918 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 06:36:52.357601  813918 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 06:36:52.498437  813918 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 06:36:52.636139  813918 docker.go:234] disabling docker service ...
	I1002 06:36:52.636222  813918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 06:36:52.659149  813918 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 06:36:52.672149  813918 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 06:36:52.790045  813918 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 06:36:52.904510  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 06:36:52.917512  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 06:36:52.931680  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 06:36:52.940606  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 06:36:52.949651  813918 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 06:36:52.949722  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 06:36:52.958437  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 06:36:52.967122  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 06:36:52.975524  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 06:36:52.984274  813918 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 06:36:52.992118  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 06:36:53.000891  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 06:36:53.011203  813918 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 06:36:53.020137  813918 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 06:36:53.027434  813918 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 06:36:53.034538  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:36:53.146732  813918 ssh_runner.go:195] Run: sudo systemctl restart containerd
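The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place — most importantly flipping `SystemdCgroup` to `false` so containerd matches the host's detected "cgroupfs" driver — before `daemon-reload` and `restart containerd` pick the changes up. The key edit, applied here to a scratch config so the effect is visible (the TOML snippet is a minimal stand-in, not the full kicbase config):

```shell
#!/usr/bin/env bash
set -eu

# Minimal stand-in for /etc/containerd/config.toml with systemd cgroups on.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same sed as in the log: force cgroupfs by setting SystemdCgroup = false,
# preserving the line's original indentation via the captured group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

grep 'SystemdCgroup' "$cfg"
```

On a real node this only takes effect after the `systemctl restart containerd` that follows, which is why minikube then waits up to 60s for the socket to reappear.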
	I1002 06:36:53.259109  813918 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 06:36:53.259213  813918 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 06:36:53.262865  813918 start.go:563] Will wait 60s for crictl version
	I1002 06:36:53.262951  813918 ssh_runner.go:195] Run: which crictl
	I1002 06:36:53.266209  813918 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 06:36:53.294330  813918 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 06:36:53.294471  813918 ssh_runner.go:195] Run: containerd --version
	I1002 06:36:53.317070  813918 ssh_runner.go:195] Run: containerd --version
	I1002 06:36:53.342544  813918 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 06:36:53.345439  813918 cli_runner.go:164] Run: docker network inspect addons-110926 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 06:36:53.361595  813918 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 06:36:53.365182  813918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 06:36:53.374561  813918 kubeadm.go:883] updating cluster {Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 06:36:53.374681  813918 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 06:36:53.374737  813918 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:36:53.399251  813918 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 06:36:53.399274  813918 containerd.go:534] Images already preloaded, skipping extraction
	I1002 06:36:53.399339  813918 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 06:36:53.423479  813918 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 06:36:53.423504  813918 cache_images.go:85] Images are preloaded, skipping loading
	I1002 06:36:53.423513  813918 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1002 06:36:53.423602  813918 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-110926 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 06:36:53.423672  813918 ssh_runner.go:195] Run: sudo crictl info
	I1002 06:36:53.448450  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:36:53.448474  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:53.448496  813918 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 06:36:53.448523  813918 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-110926 NodeName:addons-110926 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 06:36:53.448665  813918 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-110926"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 06:36:53.448861  813918 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 06:36:53.457671  813918 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 06:36:53.457745  813918 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 06:36:53.466514  813918 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1002 06:36:53.480222  813918 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 06:36:53.492979  813918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1002 06:36:53.506618  813918 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 06:36:53.510443  813918 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
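This host-entry refresh uses a different idiom than the hostname script earlier: strip any stale line for the name with `grep -v`, append the current mapping, and copy the result back over the original. The same pattern, exercised on a scratch hosts file (the stale `192.168.49.9` entry is invented here to show the replacement):

```shell
#!/usr/bin/env bash
set -eu

# Scratch hosts file with a stale mapping for the control-plane name.
f=$(mktemp)
printf '127.0.0.1 localhost\n192.168.49.9\tcontrol-plane.minikube.internal\n' > "$f"

# Drop any old line for the name, append the fresh mapping, write back.
# $'\t' is bash's ANSI-C quoting for a literal tab, as in the logged command.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$f"; \
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$f.new"
mv "$f.new" "$f"

grep 'control-plane' "$f"
```

Unlike an in-place `sed`, this also handles the case where no entry exists yet: `grep -v` passes the file through unchanged and the new line is simply appended.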
	I1002 06:36:53.519937  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:36:53.633003  813918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:36:53.653268  813918 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926 for IP: 192.168.49.2
	I1002 06:36:53.653291  813918 certs.go:195] generating shared ca certs ...
	I1002 06:36:53.653331  813918 certs.go:227] acquiring lock for ca certs: {Name:mk33b75296d4c02eee9bab3e9582ce8896a2d7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:53.654149  813918 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key
	I1002 06:36:54.554249  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt ...
	I1002 06:36:54.554277  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt: {Name:mk2139057332209b98dbb746fb9a256d2b754164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.554459  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key ...
	I1002 06:36:54.554470  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key: {Name:mkcae11ed523222e33231ecbd86e12b64a288b28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.554546  813918 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key
	I1002 06:36:54.895364  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt ...
	I1002 06:36:54.895399  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt: {Name:mke2bb76dd7b81d2d26af5e116b652209f0542b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.895600  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key ...
	I1002 06:36:54.895614  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key: {Name:mkc32897a4730ab5fb973fb69d1a38ca87d85c6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:54.896344  813918 certs.go:257] generating profile certs ...
	I1002 06:36:54.896423  813918 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key
	I1002 06:36:54.896442  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt with IP's: []
	I1002 06:36:55.419216  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt ...
	I1002 06:36:55.419259  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: {Name:mk10e15791cbf0b0edd868b4fdb8e230e5e309e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.419452  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key ...
	I1002 06:36:55.419466  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.key: {Name:mk9f0a92cebc1827b3a9e95b7f53c1d4b6a59638 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.419563  813918 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549
	I1002 06:36:55.419584  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 06:36:55.722878  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 ...
	I1002 06:36:55.722908  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549: {Name:mk85eea21d417032742d45805e5f307e924f0055 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.723654  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549 ...
	I1002 06:36:55.723671  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549: {Name:mkf298fb25e09f690a5e28cc66f4a6b37f67e15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:55.724361  813918 certs.go:382] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt.bb376549 -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt
	I1002 06:36:55.724446  813918 certs.go:386] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key.bb376549 -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key
	I1002 06:36:55.724499  813918 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key
	I1002 06:36:55.724522  813918 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt with IP's: []
	I1002 06:36:56.363048  813918 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt ...
	I1002 06:36:56.363081  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt: {Name:mk4c25ab58ebf52954efb245b3c0c0d9e1c6bfe1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:56.363911  813918 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key ...
	I1002 06:36:56.363932  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key: {Name:mk7f28565479e9a862d5049acbcab89444bf5a22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:56.364713  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 06:36:56.364779  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem (1078 bytes)
	I1002 06:36:56.364814  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem (1123 bytes)
	I1002 06:36:56.364842  813918 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem (1679 bytes)
	I1002 06:36:56.365421  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 06:36:56.384138  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 06:36:56.402907  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 06:36:56.420429  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 06:36:56.438118  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 06:36:56.455787  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 06:36:56.473374  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 06:36:56.490901  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 06:36:56.509097  813918 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 06:36:56.526744  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 06:36:56.539426  813918 ssh_runner.go:195] Run: openssl version
	I1002 06:36:56.545473  813918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 06:36:56.553848  813918 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.557589  813918 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.557674  813918 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 06:36:56.599790  813918 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 06:36:56.608153  813918 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 06:36:56.611552  813918 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 06:36:56.611600  813918 kubeadm.go:400] StartCluster: {Name:addons-110926 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-110926 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:56.611680  813918 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 06:36:56.611736  813918 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 06:36:56.639982  813918 cri.go:89] found id: ""
	I1002 06:36:56.640052  813918 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 06:36:56.647729  813918 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 06:36:56.655474  813918 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 06:36:56.655568  813918 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 06:36:56.663121  813918 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 06:36:56.663142  813918 kubeadm.go:157] found existing configuration files:
	
	I1002 06:36:56.663221  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 06:36:56.670874  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 06:36:56.670972  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 06:36:56.678534  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 06:36:56.685938  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 06:36:56.685996  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 06:36:56.692708  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 06:36:56.699925  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 06:36:56.700015  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 06:36:56.707153  813918 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 06:36:56.714621  813918 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 06:36:56.714749  813918 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 06:36:56.722338  813918 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 06:36:56.759248  813918 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 06:36:56.759571  813918 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 06:36:56.790582  813918 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 06:36:56.790657  813918 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 06:36:56.790699  813918 kubeadm.go:318] OS: Linux
	I1002 06:36:56.790763  813918 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 06:36:56.790820  813918 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 06:36:56.790875  813918 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 06:36:56.790936  813918 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 06:36:56.790994  813918 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 06:36:56.791049  813918 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 06:36:56.791100  813918 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 06:36:56.791153  813918 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 06:36:56.791207  813918 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 06:36:56.880850  813918 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 06:36:56.880966  813918 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 06:36:56.881067  813918 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 06:36:56.886790  813918 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 06:36:56.890544  813918 out.go:252]   - Generating certificates and keys ...
	I1002 06:36:56.890681  813918 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 06:36:56.890776  813918 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 06:36:57.277686  813918 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 06:36:57.698690  813918 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 06:36:58.123771  813918 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 06:36:58.316428  813918 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 06:36:58.712844  813918 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 06:36:58.713106  813918 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-110926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:36:59.412304  813918 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 06:36:59.412590  813918 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-110926 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 06:36:59.506243  813918 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 06:37:00.458571  813918 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 06:37:00.702742  813918 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 06:37:00.703124  813918 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 06:37:01.245158  813918 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 06:37:01.470802  813918 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 06:37:01.723353  813918 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 06:37:01.786251  813918 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 06:37:02.286866  813918 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 06:37:02.287602  813918 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 06:37:02.290493  813918 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 06:37:02.293946  813918 out.go:252]   - Booting up control plane ...
	I1002 06:37:02.294063  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 06:37:02.294988  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 06:37:02.295992  813918 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 06:37:02.312503  813918 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 06:37:02.312871  813918 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 06:37:02.320595  813918 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 06:37:02.321016  813918 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 06:37:02.321262  813918 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 06:37:02.457350  813918 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 06:37:02.457522  813918 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 06:37:03.461255  813918 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00198836s
	I1002 06:37:03.463308  813918 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 06:37:03.463532  813918 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 06:37:03.463645  813918 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 06:37:03.464191  813918 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 06:37:06.566691  813918 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.102303507s
	I1002 06:37:08.316492  813918 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.851816452s
	I1002 06:37:09.465139  813918 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001507743s
	I1002 06:37:09.489317  813918 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 06:37:09.522458  813918 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 06:37:09.556453  813918 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 06:37:09.556687  813918 kubeadm.go:318] [mark-control-plane] Marking the node addons-110926 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 06:37:09.572399  813918 kubeadm.go:318] [bootstrap-token] Using token: 7g41rx.fb6mqimdeeyoknq9
	I1002 06:37:09.575450  813918 out.go:252]   - Configuring RBAC rules ...
	I1002 06:37:09.575583  813918 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 06:37:09.580181  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 06:37:09.588090  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 06:37:09.592801  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 06:37:09.600582  813918 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 06:37:09.607878  813918 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 06:37:09.872917  813918 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 06:37:10.299814  813918 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 06:37:10.872732  813918 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 06:37:10.874055  813918 kubeadm.go:318] 
	I1002 06:37:10.874135  813918 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 06:37:10.874146  813918 kubeadm.go:318] 
	I1002 06:37:10.874227  813918 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 06:37:10.874248  813918 kubeadm.go:318] 
	I1002 06:37:10.874283  813918 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 06:37:10.874350  813918 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 06:37:10.874409  813918 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 06:37:10.874417  813918 kubeadm.go:318] 
	I1002 06:37:10.874473  813918 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 06:37:10.874482  813918 kubeadm.go:318] 
	I1002 06:37:10.874532  813918 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 06:37:10.874540  813918 kubeadm.go:318] 
	I1002 06:37:10.874595  813918 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 06:37:10.874679  813918 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 06:37:10.874756  813918 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 06:37:10.874764  813918 kubeadm.go:318] 
	I1002 06:37:10.874852  813918 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 06:37:10.874936  813918 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 06:37:10.874945  813918 kubeadm.go:318] 
	I1002 06:37:10.875033  813918 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 7g41rx.fb6mqimdeeyoknq9 \
	I1002 06:37:10.875146  813918 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 \
	I1002 06:37:10.875172  813918 kubeadm.go:318] 	--control-plane 
	I1002 06:37:10.875181  813918 kubeadm.go:318] 
	I1002 06:37:10.875270  813918 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 06:37:10.875279  813918 kubeadm.go:318] 
	I1002 06:37:10.875365  813918 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 7g41rx.fb6mqimdeeyoknq9 \
	I1002 06:37:10.875475  813918 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 
	I1002 06:37:10.878324  813918 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 06:37:10.878562  813918 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 06:37:10.878676  813918 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 06:37:10.878697  813918 cni.go:84] Creating CNI manager for ""
	I1002 06:37:10.878705  813918 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:37:10.881877  813918 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 06:37:10.884817  813918 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 06:37:10.889466  813918 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 06:37:10.889488  813918 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 06:37:10.902465  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 06:37:11.181141  813918 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 06:37:11.181229  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:11.181309  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-110926 minikube.k8s.io/updated_at=2025_10_02T06_37_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=addons-110926 minikube.k8s.io/primary=true
	I1002 06:37:11.362613  813918 ops.go:34] apiserver oom_adj: -16
	I1002 06:37:11.362717  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:11.863387  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:12.363462  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:12.863468  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:13.362840  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:13.863815  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:14.363244  813918 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 06:37:14.495136  813918 kubeadm.go:1113] duration metric: took 3.313961954s to wait for elevateKubeSystemPrivileges
	I1002 06:37:14.495171  813918 kubeadm.go:402] duration metric: took 17.883574483s to StartCluster
	I1002 06:37:14.495189  813918 settings.go:142] acquiring lock: {Name:mkfabb257d5e6dc89516b7f3eecfb5ad470245b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:14.495908  813918 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:37:14.496318  813918 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/kubeconfig: {Name:mk61b1a16c6c070d43ba1e4fed7f7f8861077db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:37:14.497144  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 06:37:14.497165  813918 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 06:37:14.497416  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:14.497447  813918 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1002 06:37:14.497542  813918 addons.go:69] Setting yakd=true in profile "addons-110926"
	I1002 06:37:14.497556  813918 addons.go:238] Setting addon yakd=true in "addons-110926"
	I1002 06:37:14.497579  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.497665  813918 addons.go:69] Setting inspektor-gadget=true in profile "addons-110926"
	I1002 06:37:14.497681  813918 addons.go:238] Setting addon inspektor-gadget=true in "addons-110926"
	I1002 06:37:14.497701  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.498032  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.498105  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.498760  813918 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-110926"
	I1002 06:37:14.498784  813918 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-110926"
	I1002 06:37:14.498819  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.499233  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.504834  813918 addons.go:69] Setting metrics-server=true in profile "addons-110926"
	I1002 06:37:14.504923  813918 addons.go:238] Setting addon metrics-server=true in "addons-110926"
	I1002 06:37:14.504988  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.505608  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.507518  813918 out.go:179] * Verifying Kubernetes components...
	I1002 06:37:14.507725  813918 addons.go:69] Setting cloud-spanner=true in profile "addons-110926"
	I1002 06:37:14.507753  813918 addons.go:238] Setting addon cloud-spanner=true in "addons-110926"
	I1002 06:37:14.507795  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.508276  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.519123  813918 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-110926"
	I1002 06:37:14.519204  813918 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-110926"
	I1002 06:37:14.519258  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.523209  813918 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-110926"
	I1002 06:37:14.523335  813918 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-110926"
	I1002 06:37:14.523396  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.523909  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.524419  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.536906  813918 addons.go:69] Setting registry=true in profile "addons-110926"
	I1002 06:37:14.536941  813918 addons.go:238] Setting addon registry=true in "addons-110926"
	I1002 06:37:14.536983  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.537475  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.539289  813918 addons.go:69] Setting default-storageclass=true in profile "addons-110926"
	I1002 06:37:14.558568  813918 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-110926"
	I1002 06:37:14.559019  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.559239  813918 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 06:37:14.541208  813918 addons.go:69] Setting registry-creds=true in profile "addons-110926"
	I1002 06:37:14.561178  813918 addons.go:238] Setting addon registry-creds=true in "addons-110926"
	I1002 06:37:14.561363  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.541231  813918 addons.go:69] Setting storage-provisioner=true in profile "addons-110926"
	I1002 06:37:14.563047  813918 addons.go:238] Setting addon storage-provisioner=true in "addons-110926"
	I1002 06:37:14.563932  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.566547  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.541239  813918 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-110926"
	I1002 06:37:14.579820  813918 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-110926"
	I1002 06:37:14.580221  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.586764  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.541246  813918 addons.go:69] Setting volcano=true in profile "addons-110926"
	I1002 06:37:14.607872  813918 addons.go:238] Setting addon volcano=true in "addons-110926"
	I1002 06:37:14.541349  813918 addons.go:69] Setting volumesnapshots=true in profile "addons-110926"
	I1002 06:37:14.607929  813918 addons.go:238] Setting addon volumesnapshots=true in "addons-110926"
	I1002 06:37:14.607950  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.556898  813918 addons.go:69] Setting gcp-auth=true in profile "addons-110926"
	I1002 06:37:14.624993  813918 mustload.go:65] Loading cluster: addons-110926
	I1002 06:37:14.625253  813918 config.go:182] Loaded profile config "addons-110926": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 06:37:14.625626  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.556924  813918 addons.go:69] Setting ingress=true in profile "addons-110926"
	I1002 06:37:14.631873  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.632366  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.556929  813918 addons.go:69] Setting ingress-dns=true in profile "addons-110926"
	I1002 06:37:14.632643  813918 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1002 06:37:14.631728  813918 addons.go:238] Setting addon ingress=true in "addons-110926"
	I1002 06:37:14.633388  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.633841  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.650708  813918 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1002 06:37:14.654882  813918 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1002 06:37:14.654909  813918 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1002 06:37:14.654981  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.659338  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.671893  813918 addons.go:238] Setting addon ingress-dns=true in "addons-110926"
	I1002 06:37:14.671956  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.672451  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.681943  813918 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1002 06:37:14.682145  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1002 06:37:14.682171  813918 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1002 06:37:14.682243  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.730779  813918 addons.go:238] Setting addon default-storageclass=true in "addons-110926"
	I1002 06:37:14.730824  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.731463  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.736081  813918 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 06:37:14.743901  813918 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:37:14.748859  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1002 06:37:14.749029  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.798861  813918 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1002 06:37:14.801456  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1002 06:37:14.801501  813918 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1002 06:37:14.801637  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.840051  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1002 06:37:14.844935  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1002 06:37:14.848913  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1002 06:37:14.851733  813918 out.go:179]   - Using image docker.io/registry:3.0.0
	I1002 06:37:14.854520  813918 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1002 06:37:14.857638  813918 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1002 06:37:14.858717  813918 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:37:14.858738  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1002 06:37:14.858817  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.860526  813918 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1002 06:37:14.860546  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1002 06:37:14.860632  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.893874  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1002 06:37:14.894058  813918 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1002 06:37:14.897434  813918 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:37:14.897458  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1002 06:37:14.897547  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.918428  813918 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-110926"
	I1002 06:37:14.918472  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.918875  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:14.921121  813918 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1002 06:37:14.925950  813918 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1002 06:37:14.925974  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1002 06:37:14.926042  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.945293  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:14.949541  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1002 06:37:14.956438  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1002 06:37:14.957575  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:14.966829  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1002 06:37:14.967843  813918 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 06:37:14.983357  813918 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1002 06:37:14.991256  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1002 06:37:14.991531  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1002 06:37:14.991690  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:14.992663  813918 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:37:14.992678  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 06:37:14.992742  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:14.996512  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:14.996904  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1002 06:37:14.996921  813918 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1002 06:37:14.996989  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.005391  813918 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1002 06:37:15.005812  813918 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1002 06:37:15.006640  813918 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 06:37:15.006661  813918 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 06:37:15.006739  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.008284  813918 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1002 06:37:15.009342  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1002 06:37:15.009438  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1002 06:37:15.009541  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.028005  813918 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1002 06:37:15.033152  813918 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 06:37:15.033183  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1002 06:37:15.033275  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.054617  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.055541  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:15.055750  813918 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:37:15.055763  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1002 06:37:15.055832  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.061085  813918 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:37:15.061106  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1002 06:37:15.061173  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.074564  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.081642  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.111200  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.136860  813918 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 06:37:15.148801  813918 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1002 06:37:15.151741  813918 out.go:179]   - Using image docker.io/busybox:stable
	I1002 06:37:15.156261  813918 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:37:15.156284  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1002 06:37:15.156355  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:15.169924  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.193516  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.199715  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.214370  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.237018  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.237601  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.243930  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.244071  813918 retry.go:31] will retry after 305.561491ms: ssh: handshake failed: EOF
	I1002 06:37:15.251932  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.255879  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.259811  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.259836  813918 retry.go:31] will retry after 210.072349ms: ssh: handshake failed: EOF
	I1002 06:37:15.265683  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:15.272079  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:15.565323  813918 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1002 06:37:15.565348  813918 retry.go:31] will retry after 243.153386ms: ssh: handshake failed: EOF
	I1002 06:37:15.846286  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1002 06:37:15.846311  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1002 06:37:15.944527  813918 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:15.944599  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1002 06:37:15.970354  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1002 06:37:15.985885  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1002 06:37:16.012665  813918 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1002 06:37:16.012693  813918 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1002 06:37:16.019458  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1002 06:37:16.019485  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1002 06:37:16.043516  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1002 06:37:16.043539  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1002 06:37:16.060218  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1002 06:37:16.072624  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 06:37:16.090843  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1002 06:37:16.096286  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1002 06:37:16.096364  813918 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1002 06:37:16.184119  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:16.205029  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1002 06:37:16.206409  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 06:37:16.211099  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1002 06:37:16.221140  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1002 06:37:16.281478  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1002 06:37:16.281550  813918 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1002 06:37:16.294235  813918 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1002 06:37:16.294308  813918 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1002 06:37:16.314044  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1002 06:37:16.314122  813918 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1002 06:37:16.314878  813918 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:37:16.314923  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1002 06:37:16.334271  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1002 06:37:16.435552  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1002 06:37:16.435625  813918 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1002 06:37:16.486137  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1002 06:37:16.508790  813918 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:37:16.508817  813918 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1002 06:37:16.527074  813918 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.79094086s)
	I1002 06:37:16.527103  813918 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 06:37:16.527172  813918 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.390287567s)
	I1002 06:37:16.527930  813918 node_ready.go:35] waiting up to 6m0s for node "addons-110926" to be "Ready" ...
	I1002 06:37:16.692302  813918 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:16.692321  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1002 06:37:16.739744  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1002 06:37:16.739768  813918 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1002 06:37:16.803024  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1002 06:37:16.866551  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:16.918292  813918 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:37:16.918317  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1002 06:37:16.976907  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1002 06:37:16.976934  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1002 06:37:17.032696  813918 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-110926" context rescaled to 1 replicas
	I1002 06:37:17.174089  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1002 06:37:17.174115  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1002 06:37:17.194531  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1002 06:37:17.590550  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1002 06:37:17.590575  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1002 06:37:17.985718  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1002 06:37:17.985751  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1002 06:37:18.258016  813918 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1002 06:37:18.258042  813918 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1002 06:37:18.426273  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1002 06:37:18.426298  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	W1002 06:37:18.558468  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:18.892311  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1002 06:37:18.892338  813918 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1002 06:37:19.094159  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1002 06:37:19.094182  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1002 06:37:19.262380  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1002 06:37:19.262404  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1002 06:37:19.445644  813918 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1002 06:37:19.445669  813918 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1002 06:37:19.720946  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1002 06:37:21.041084  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:21.578538  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.608100964s)
	I1002 06:37:21.578618  813918 addons.go:479] Verifying addon ingress=true in "addons-110926"
	I1002 06:37:21.579021  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.5930618s)
	I1002 06:37:21.579193  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.518951153s)
	I1002 06:37:21.579261  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.506611096s)
	I1002 06:37:21.582085  813918 out.go:179] * Verifying ingress addon...
	I1002 06:37:21.586543  813918 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1002 06:37:21.655191  813918 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1002 06:37:21.655263  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.115015  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.583411  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1002 06:37:22.583564  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:22.610354  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	I1002 06:37:22.612089  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:22.737638  813918 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1002 06:37:22.767377  813918 addons.go:238] Setting addon gcp-auth=true in "addons-110926"
	I1002 06:37:22.767434  813918 host.go:66] Checking if "addons-110926" exists ...
	I1002 06:37:22.767894  813918 cli_runner.go:164] Run: docker container inspect addons-110926 --format={{.State.Status}}
	I1002 06:37:22.793827  813918 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1002 06:37:22.793887  813918 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-110926
	I1002 06:37:22.830306  813918 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33863 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/addons-110926/id_rsa Username:docker}
	W1002 06:37:23.096079  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:23.101826  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:23.167688  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (7.0767591s)
	I1002 06:37:23.167794  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.983606029s)
	W1002 06:37:23.167817  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:23.167835  813918 retry.go:31] will retry after 146.597414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:23.167865  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.9627765s)
	I1002 06:37:23.167924  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.96145652s)
	I1002 06:37:23.167989  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.956824802s)
	I1002 06:37:23.168168  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.946960517s)
	I1002 06:37:23.168215  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.833882657s)
	I1002 06:37:23.168229  813918 addons.go:479] Verifying addon registry=true in "addons-110926"
	I1002 06:37:23.168432  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.682270471s)
	I1002 06:37:23.168504  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.365459957s)
	I1002 06:37:23.168515  813918 addons.go:479] Verifying addon metrics-server=true in "addons-110926"
	I1002 06:37:23.168593  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.302013657s)
	W1002 06:37:23.168612  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:37:23.168628  813918 retry.go:31] will retry after 145.945512ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1002 06:37:23.168670  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.974112429s)
	I1002 06:37:23.171600  813918 out.go:179] * Verifying registry addon...
	I1002 06:37:23.175423  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1002 06:37:23.175675  813918 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-110926 service yakd-dashboard -n yakd-dashboard
	
	I1002 06:37:23.215812  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.494815173s)
	I1002 06:37:23.215842  813918 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-110926"
	I1002 06:37:23.218592  813918 out.go:179] * Verifying csi-hostpath-driver addon...
	I1002 06:37:23.218725  813918 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1002 06:37:23.222422  813918 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1002 06:37:23.223098  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1002 06:37:23.225306  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1002 06:37:23.225336  813918 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1002 06:37:23.265230  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1002 06:37:23.265257  813918 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1002 06:37:23.271284  813918 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:37:23.271303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:23.301079  813918 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:37:23.301100  813918 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1002 06:37:23.315262  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:23.315479  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1002 06:37:23.362438  813918 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:37:23.362461  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:23.371447  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1002 06:37:23.590215  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:23.690482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:23.726143  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.091791  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:24.192769  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:24.240956  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.605709  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:24.703226  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:24.726522  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:24.936549  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.621028233s)
	I1002 06:37:24.936718  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.621420893s)
	W1002 06:37:24.936789  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:24.936837  813918 retry.go:31] will retry after 561.608809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:24.936908  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.565434855s)
	I1002 06:37:24.939978  813918 addons.go:479] Verifying addon gcp-auth=true in "addons-110926"
	I1002 06:37:24.944986  813918 out.go:179] * Verifying gcp-auth addon...
	I1002 06:37:24.948596  813918 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1002 06:37:24.951413  813918 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1002 06:37:24.951434  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:25.090748  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:25.178550  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:25.226439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:25.452219  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:25.499574  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:37:25.531518  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:25.589865  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:25.683530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:25.726612  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:25.951542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:26.090750  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:26.179030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:26.226732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:26.317076  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:26.317226  813918 retry.go:31] will retry after 583.727209ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:26.452148  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:26.589788  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:26.683078  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:26.727068  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:26.901144  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:26.952896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:27.091613  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:27.179042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:27.226561  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:27.451348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:27.531649  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:27.591525  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:27.683031  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:27.712297  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:27.712326  813918 retry.go:31] will retry after 648.169313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:27.726104  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:27.952014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:28.090463  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:28.191332  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:28.226482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:28.360900  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:28.452621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:28.590622  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:28.684494  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:28.726619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:28.952459  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:29.090817  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:29.180514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:29.185770  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:29.185799  813918 retry.go:31] will retry after 638.486695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:29.226864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:29.451636  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:29.589804  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:29.683512  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:29.726574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:29.824932  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:29.952114  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:30.032649  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:30.090885  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:30.179094  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:30.226154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:30.452508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:30.592222  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:30.684732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:37:30.698805  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:30.698840  813918 retry.go:31] will retry after 1.386655025s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 06:37:30.726921  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:30.951637  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:31.090673  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:31.178664  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:31.226447  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:31.452374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:31.590331  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:31.682815  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:31.726337  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:31.952229  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:32.086627  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:32.090653  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:32.179238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:32.226721  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:32.452452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:32.530986  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:32.590889  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:32.683805  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:32.727482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:32.884199  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:32.884242  813918 retry.go:31] will retry after 1.764941661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 06:37:32.952014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:33.090182  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:33.179042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:33.226874  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:33.451508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:33.590092  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:33.682977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:33.725974  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:33.951836  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:34.090782  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:34.178819  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:34.226525  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:34.452486  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:34.531295  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:34.590650  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:34.649946  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:34.686870  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:34.726748  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:34.952390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:35.093119  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:35.179530  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:35.226048  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:35.451917  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:35.484501  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:35.484530  813918 retry.go:31] will retry after 6.007881753s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 06:37:35.590705  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:35.683551  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:35.726503  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:35.952327  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:36.090688  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:36.191481  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:36.226150  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:36.452471  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:36.590726  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:36.683932  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:36.727072  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:36.951909  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:37.032811  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:37.090041  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:37.178985  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:37.226683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:37.451377  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:37.590155  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:37.683502  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:37.726422  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:37.951666  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:38.090533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:38.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:38.226290  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:38.452969  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:38.589891  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:38.678445  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:38.726426  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:38.951569  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:39.090363  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:39.178554  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:39.226682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:39.451688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:39.531480  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:39.589495  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:39.683560  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:39.726605  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:39.951696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:40.090353  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:40.179467  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:40.226430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:40.451667  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:40.590213  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:40.682834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:40.726735  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:40.951452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:41.090424  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:41.178251  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:41.225935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:41.451935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:41.493320  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1002 06:37:41.531920  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:41.590388  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:41.682815  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:41.727080  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:41.951832  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:42.097513  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:42.180007  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:42.228335  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:37:42.397373  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:42.397404  813918 retry.go:31] will retry after 6.331757331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 06:37:42.452908  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:42.590432  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:42.683443  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:42.726508  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:42.952318  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:43.090165  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:43.178978  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:43.225896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:43.451987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:43.590602  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:43.678528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:43.726661  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:43.951424  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:44.031312  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:44.090520  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:44.178976  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:44.226569  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:44.451727  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:44.596784  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:44.697937  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:44.726640  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:44.951415  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:45.090703  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:45.179490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:45.227523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:45.451631  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:45.589687  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:45.683601  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:45.727673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:45.951624  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:46.031927  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:46.090068  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:46.178708  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:46.226451  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:46.451429  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:46.590533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:46.678457  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:46.726355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:46.952193  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:47.090132  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:47.179505  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:47.226590  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:47.451700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:47.590360  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:47.683040  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:47.725863  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:47.952219  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:48.090642  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:48.178440  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:48.226648  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:48.451752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:48.531666  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:48.590304  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:48.678358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:48.726321  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:48.729320  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:37:48.951489  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:49.091175  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:49.180116  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:49.226101  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:49.452407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:49.530266  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:37:49.530298  813918 retry.go:31] will retry after 12.414314859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1002 06:37:49.590599  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:49.683495  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:49.726800  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:49.951645  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:50.090598  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:50.178639  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:50.226627  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:50.451589  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:50.590544  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:50.682812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:50.726927  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:50.951882  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:51.030659  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:51.089892  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:51.179276  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:51.225934  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:51.451935  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:51.589726  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:51.683005  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:51.725957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:51.951996  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:52.091773  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:52.178278  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:52.226119  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:52.451977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:52.590251  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:52.683413  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:52.726061  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:52.952248  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:53.031163  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:53.090127  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:53.178995  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:53.227062  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:53.452030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:53.590043  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:53.683319  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:53.726034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:53.951951  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:54.090498  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:54.178558  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:54.226461  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:54.451500  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:54.590406  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:54.683724  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:54.726962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:54.952006  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:37:55.031442  813918 node_ready.go:57] node "addons-110926" has "Ready":"False" status (will retry)
	I1002 06:37:55.091214  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:55.179018  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:55.225804  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:55.451548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:55.590030  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:55.682894  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:55.726632  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:55.951851  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:56.090254  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:56.179316  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:56.225963  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:56.451980  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:56.589903  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:56.683768  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:56.726710  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:56.969890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:57.039661  813918 node_ready.go:49] node "addons-110926" is "Ready"
	I1002 06:37:57.039759  813918 node_ready.go:38] duration metric: took 40.511800003s for node "addons-110926" to be "Ready" ...
	I1002 06:37:57.039788  813918 api_server.go:52] waiting for apiserver process to appear ...
	I1002 06:37:57.039875  813918 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 06:37:57.093303  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:57.094841  813918 api_server.go:72] duration metric: took 42.597646349s to wait for apiserver process to appear ...
	I1002 06:37:57.094869  813918 api_server.go:88] waiting for apiserver healthz status ...
	I1002 06:37:57.094891  813918 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 06:37:57.110477  813918 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 06:37:57.112002  813918 api_server.go:141] control plane version: v1.34.1
	I1002 06:37:57.112039  813918 api_server.go:131] duration metric: took 17.162356ms to wait for apiserver health ...
	I1002 06:37:57.112050  813918 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 06:37:57.164751  813918 system_pods.go:59] 19 kube-system pods found
	I1002 06:37:57.164836  813918 system_pods.go:61] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.164843  813918 system_pods.go:61] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.164850  813918 system_pods.go:61] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.164855  813918 system_pods.go:61] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending
	I1002 06:37:57.164860  813918 system_pods.go:61] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.164866  813918 system_pods.go:61] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.164895  813918 system_pods.go:61] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.164906  813918 system_pods.go:61] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.164911  813918 system_pods.go:61] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.164915  813918 system_pods.go:61] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.164927  813918 system_pods.go:61] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.164931  813918 system_pods.go:61] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.164936  813918 system_pods.go:61] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.164940  813918 system_pods.go:61] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.164952  813918 system_pods.go:61] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.164956  813918 system_pods.go:61] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.164969  813918 system_pods.go:61] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.164978  813918 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending
	I1002 06:37:57.164984  813918 system_pods.go:61] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.164996  813918 system_pods.go:74] duration metric: took 52.940352ms to wait for pod list to return data ...
	I1002 06:37:57.165020  813918 default_sa.go:34] waiting for default service account to be created ...
	I1002 06:37:57.180144  813918 default_sa.go:45] found service account: "default"
	I1002 06:37:57.180178  813918 default_sa.go:55] duration metric: took 15.149731ms for default service account to be created ...
	I1002 06:37:57.180188  813918 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 06:37:57.222552  813918 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1002 06:37:57.222577  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:57.223365  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.223410  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.223418  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.223424  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.223428  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending
	I1002 06:37:57.223442  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.223456  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.223462  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.223474  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.223481  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.223485  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.223492  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.223496  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.223503  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.223507  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.223510  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.223514  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.223521  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.223531  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending
	I1002 06:37:57.223536  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.223550  813918 retry.go:31] will retry after 203.421597ms: missing components: kube-dns
	I1002 06:37:57.317769  813918 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1002 06:37:57.317813  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:57.437762  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.437803  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.437810  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending
	I1002 06:37:57.437815  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending
	I1002 06:37:57.437821  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:57.437826  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.437841  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.437853  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.437869  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.437874  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending
	I1002 06:37:57.437877  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.437882  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.437900  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending
	I1002 06:37:57.437905  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending
	I1002 06:37:57.437909  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending
	I1002 06:37:57.437913  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending
	I1002 06:37:57.437926  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending
	I1002 06:37:57.437937  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.437946  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.437955  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending
	I1002 06:37:57.437969  813918 retry.go:31] will retry after 264.460556ms: missing components: kube-dns
	I1002 06:37:57.457586  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:57.591211  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:57.684302  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:57.707934  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:57.707975  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:57.707990  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:57.708000  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:57.708018  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:57.708030  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:57.708035  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:57.708040  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:57.708051  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:57.708113  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:57.708129  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:57.708172  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:57.708184  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:57.708195  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:57.708207  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:57.708220  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:57.708228  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:57.708247  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.708255  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:57.708270  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:57.708285  813918 retry.go:31] will retry after 422.985157ms: missing components: kube-dns
	I1002 06:37:57.742917  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:57.952834  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:58.091317  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:58.137271  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:58.137312  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1002 06:37:58.137322  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:58.137331  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:58.137338  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:58.137342  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:58.137350  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:58.137355  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:58.137359  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:58.137366  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:58.137375  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:58.137380  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:58.137386  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:58.137399  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:58.137411  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:58.137417  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:58.137426  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:58.137433  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.137444  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.137451  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:58.137467  813918 retry.go:31] will retry after 586.146569ms: missing components: kube-dns
	I1002 06:37:58.178407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:58.235878  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:58.452723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:58.614086  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:58.705574  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:58.752782  813918 system_pods.go:86] 19 kube-system pods found
	I1002 06:37:58.752871  813918 system_pods.go:89] "coredns-66bc5c9577-s68lt" [2d3e272c-d302-4d35-a5ef-9975fc94eb91] Running
	I1002 06:37:58.752902  813918 system_pods.go:89] "csi-hostpath-attacher-0" [d7f8cb91-f20f-4078-a2ac-821951d89bf7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1002 06:37:58.752951  813918 system_pods.go:89] "csi-hostpath-resizer-0" [81e5b5f9-ee61-4a6e-82b3-962546097c19] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1002 06:37:58.752984  813918 system_pods.go:89] "csi-hostpathplugin-mg6q4" [e95e4d0a-1cc7-4f8b-9500-3b7042b37779] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1002 06:37:58.753015  813918 system_pods.go:89] "etcd-addons-110926" [5b7c4657-f07d-4cc3-ac03-d14065e9cc4c] Running
	I1002 06:37:58.753040  813918 system_pods.go:89] "kindnet-zb4h8" [333c3958-8762-4a65-bfdc-d8207ffd9bbb] Running
	I1002 06:37:58.753071  813918 system_pods.go:89] "kube-apiserver-addons-110926" [dee175c5-d0ea-4a5d-b3ca-34320a1dd34f] Running
	I1002 06:37:58.753100  813918 system_pods.go:89] "kube-controller-manager-addons-110926" [c63f7e84-d40c-4ece-8b2c-ae8d220189f1] Running
	I1002 06:37:58.753128  813918 system_pods.go:89] "kube-ingress-dns-minikube" [ef8b2745-553d-44a6-984e-b4ab801f79f7] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1002 06:37:58.753156  813918 system_pods.go:89] "kube-proxy-4zvzf" [47cfdd22-8955-4a33-aae8-8f2703bfe262] Running
	I1002 06:37:58.753185  813918 system_pods.go:89] "kube-scheduler-addons-110926" [3db0cde2-faea-4b97-ae64-fc88c6df02b4] Running
	I1002 06:37:58.753215  813918 system_pods.go:89] "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 06:37:58.753246  813918 system_pods.go:89] "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1002 06:37:58.753287  813918 system_pods.go:89] "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1002 06:37:58.753323  813918 system_pods.go:89] "registry-creds-764b6fb674-s7sx5" [0b84bec7-8d9d-4d30-9860-3d491871c922] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1002 06:37:58.753344  813918 system_pods.go:89] "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1002 06:37:58.753369  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-69zvz" [b6b98890-4555-429c-825e-71090acecfd6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.753402  813918 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xwmkw" [461d82bb-acbf-4956-8cfe-77037a7681eb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1002 06:37:58.753429  813918 system_pods.go:89] "storage-provisioner" [ef958e30-53c7-432f-9681-b7cf0e8ae0a1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1002 06:37:58.753455  813918 system_pods.go:126] duration metric: took 1.573257013s to wait for k8s-apps to be running ...
	I1002 06:37:58.753478  813918 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 06:37:58.753557  813918 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 06:37:58.756092  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:58.811373  813918 system_svc.go:56] duration metric: took 57.886892ms WaitForService to wait for kubelet
	I1002 06:37:58.811449  813918 kubeadm.go:586] duration metric: took 44.314256903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 06:37:58.811493  813918 node_conditions.go:102] verifying NodePressure condition ...
	I1002 06:37:58.822249  813918 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 06:37:58.822353  813918 node_conditions.go:123] node cpu capacity is 2
	I1002 06:37:58.822383  813918 node_conditions.go:105] duration metric: took 10.860686ms to run NodePressure ...
	I1002 06:37:58.822420  813918 start.go:241] waiting for startup goroutines ...
	I1002 06:37:58.952958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:59.090849  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:59.194378  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:59.293675  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:59.453551  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:37:59.590199  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:37:59.683743  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:37:59.727149  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:37:59.952566  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:00.095335  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:00.179662  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:00.233910  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:00.456053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:00.590708  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:00.683163  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:00.726621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:00.952293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:01.091005  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:01.179669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:01.229085  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:01.453177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:01.591279  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:01.686492  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:01.728097  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:01.945617  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:01.952810  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:02.090686  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:02.179657  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:02.228561  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:02.452023  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:02.591508  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:02.683154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:02.726517  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1002 06:38:02.824299  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:02.824331  813918 retry.go:31] will retry after 15.691806375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
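The failure above is kubectl's client-side validation: one of the YAML documents in `ig-crd.yaml` lacks the required `apiVersion` and `kind` top-level fields, so the whole apply exits non-zero even though the other resources were applied unchanged. A minimal stand-alone sketch of that header check (the helper name and the second document are hypothetical, not minikube or kubectl code):

```python
def missing_headers(doc):
    """Return the required top-level keys absent from a manifest document."""
    return [k for k in ("apiVersion", "kind") if k not in doc]

# A well-formed CRD document declares both required headers.
crd_ok = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "example-crd"},  # hypothetical name
}
# A document like the one kubectl rejects: headers missing entirely.
crd_bad = {"metadata": {"name": "missing-headers"}}

print(missing_headers(crd_ok))   # []
print(missing_headers(crd_bad))  # ['apiVersion', 'kind']
```

Passing `--validate=false`, as the error suggests, would skip this check, but the server would still reject a document it cannot type.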
	I1002 06:38:02.952380  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:03.090609  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:03.178940  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:03.227145  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:03.453458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:03.590296  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:03.683856  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:03.728071  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:03.952283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:04.091664  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:04.192092  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:04.226458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:04.451525  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:04.589908  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:04.683265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:04.730121  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:04.952803  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:05.091341  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:05.179246  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:05.227241  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:05.453166  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:05.590701  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:05.678855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:05.729441  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:05.955761  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:06.089976  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:06.179542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:06.229669  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:06.451663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:06.590195  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:06.684205  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:06.784414  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:06.952931  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:07.090633  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:07.179271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:07.226645  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:07.452374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:07.590940  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:07.683125  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:07.726314  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:07.958423  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:08.089866  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:08.178562  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:08.226685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:08.452416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:08.589770  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:08.683752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:08.726663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:08.952521  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:09.090474  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:09.179170  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:09.227253  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:09.453357  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:09.593377  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:09.684130  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:09.728107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:09.951741  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:10.090984  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:10.181589  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:10.227685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:10.451548  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:10.590276  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:10.684315  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:10.726459  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:10.951730  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:11.094349  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:11.181744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:11.226987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:11.452812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:11.589905  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:11.684532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:11.727310  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:11.952952  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:12.090716  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:12.178859  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:12.227650  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:12.452172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:12.590288  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:12.684016  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:12.727454  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:12.952912  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:13.089873  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:13.179357  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:13.226476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:13.452233  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:13.590829  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:13.683018  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:13.727319  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:13.952542  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:14.091679  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:14.180387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:14.229029  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:14.453283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:14.593239  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:14.684343  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:14.727726  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:14.951591  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:15.090426  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:15.178861  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:15.227557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:15.452049  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:15.591161  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:15.683892  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:15.726700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:15.951767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:16.090224  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:16.179552  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:16.230312  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:16.452584  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:16.590173  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:16.682977  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:16.728540  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:16.952802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:17.089859  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:17.178855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:17.227103  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:17.452592  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:17.589995  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:17.683737  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:17.727124  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:17.952069  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:18.090149  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:18.178860  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:18.227063  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:18.452179  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:18.516517  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:18.591793  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:18.683303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:18.726902  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:18.951881  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:19.090407  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:19.179390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:19.280453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:19.453053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1002 06:38:19.506255  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:19.506287  813918 retry.go:31] will retry after 24.46264979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
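After each failed apply, `retry.go` reschedules with a growing, jittered delay (15.7s above, then 24.5s here). A generic sketch of that pattern, assuming nothing about minikube's internal constants:

```python
import random

def backoff_delays(base=10.0, factor=1.6, jitter=0.2, attempts=4, seed=None):
    """Yield growing retry delays with +/- jitter (illustrative constants only)."""
    rng = random.Random(seed)
    delay = base
    for _ in range(attempts):
        # Perturb the nominal delay by up to +/- jitter so retries de-synchronize.
        yield delay * (1 + rng.uniform(-jitter, jitter))
        delay *= factor

for d in backoff_delays(seed=1):
    print(f"will retry after {d:.1f}s")
```

Jitter keeps many concurrent waiters from retrying in lockstep; the multiplicative factor bounds total retry pressure while the underlying fault persists.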
	I1002 06:38:19.591253  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:19.683612  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:19.727161  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:19.951604  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:20.090820  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:20.179282  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:20.226653  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:20.451718  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:20.590946  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:20.683133  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:20.726532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:20.952036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:21.090532  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:21.179243  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:21.227567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:21.452954  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:21.590813  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:21.683988  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:21.726704  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:21.955708  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:22.090204  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:22.179312  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:22.226758  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:22.451702  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:22.590436  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:22.683396  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:22.726810  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:22.952518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:23.090640  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:23.178389  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:23.226432  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:23.452557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:23.589536  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:23.683265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:23.726387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:23.951660  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:24.089946  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:24.179032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:24.231204  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:24.452096  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:24.591481  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:24.684150  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:24.727560  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:24.951946  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:25.090564  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:25.180720  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:25.227767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:25.452182  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:25.590552  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:25.683982  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:25.727145  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:25.952505  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:26.096097  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:26.199167  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:26.227457  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:26.451429  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:26.589950  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:26.682877  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:26.728464  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:26.952825  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:27.090029  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:27.178693  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:27.227164  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:27.451877  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:27.590889  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:27.694494  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:27.726681  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:27.953022  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:28.090718  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:28.178712  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:28.226849  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:28.451699  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:28.590634  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:28.680358  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:28.727806  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:28.952386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:29.090865  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:29.192262  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:29.296040  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:29.458956  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:29.592945  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:29.696528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:29.727745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:29.960224  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:30.108669  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:30.181176  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:30.229077  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:30.453626  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:30.590233  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:30.688386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:30.727482  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:30.962237  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:31.091531  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:31.180490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:31.229509  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:31.452749  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:31.591491  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:31.683355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:31.726970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:31.952445  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:32.091436  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:32.190896  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:32.228381  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:32.452736  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:32.590064  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:32.684030  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:32.726390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:32.951770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:33.090909  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:33.178957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:33.228094  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:33.452528  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:33.590375  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:33.684236  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:33.727041  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:33.952649  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1002 06:38:34.090690  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:34.178430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:34.227390  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:34.452820  813918 kapi.go:107] duration metric: took 1m9.5042235s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1002 06:38:34.456518  813918 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-110926 cluster.
	I1002 06:38:34.459299  813918 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1002 06:38:34.462514  813918 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1002 06:38:34.590456  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:34.683783  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:34.726876  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:35.091815  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:35.192181  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:35.225996  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:35.590532  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:35.683177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:35.727077  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:36.090514  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:36.178631  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:36.226657  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:36.590586  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:36.684420  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:36.726745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:37.090769  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:37.193241  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:37.227067  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:37.591255  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:37.682734  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:37.727297  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:38.089746  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:38.178757  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:38.227287  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:38.591547  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:38.691271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:38.727108  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:39.106229  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:39.202273  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:39.228516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:39.589988  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:39.679442  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:39.726895  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:40.094511  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:40.179452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:40.237240  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:40.601942  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:40.693742  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:40.738619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:41.091045  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:41.191515  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:41.226632  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:41.591721  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:41.683452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:41.726863  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:42.091861  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:42.204238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:42.227557  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:42.590297  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:42.683271  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:42.727579  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.091018  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:43.179103  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:43.226868  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.591731  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:43.684032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:43.726500  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:43.969756  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:38:44.090261  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:44.179366  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:44.228188  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:44.592341  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:44.686940  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:44.727784  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:45.092283  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:45.178091  813918 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.20829608s)
	W1002 06:38:45.178208  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:45.178250  813918 retry.go:31] will retry after 22.26617142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 06:38:45.179543  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:45.236432  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:45.590441  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:45.679320  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:45.727621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1002 06:38:46.090405  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:46.178426  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:46.226663  813918 kapi.go:107] duration metric: took 1m23.00356106s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1002 06:38:46.589619  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:46.683261  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:47.089734  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:47.179374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:47.592660  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:47.683768  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:48.090007  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:48.178644  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:48.591375  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:48.683509  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:49.089829  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:49.178961  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:49.591248  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:49.691276  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:50.089984  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:50.179171  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:50.590696  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:50.683346  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:51.089635  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:51.178745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:51.590723  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:51.683306  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:52.090482  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:52.190696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:52.590622  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:52.678787  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:53.090135  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:53.179421  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:53.590204  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:53.684303  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:54.089742  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:54.178289  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:54.591054  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:54.692841  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:55.091556  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:55.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:55.590831  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:55.682962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:56.090533  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:56.178890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:56.590836  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:56.683198  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:57.090570  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:57.179513  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:57.590364  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:57.683132  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:58.089540  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:58.179053  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:58.590839  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:58.683962  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:59.090850  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:59.190988  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:38:59.590732  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:38:59.685032  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:00.114597  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:00.198802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:00.590774  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:00.683043  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:01.090771  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:01.178723  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:01.590300  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:01.684480  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:02.091506  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:02.180050  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:02.591681  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:02.686987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:03.092104  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:03.180518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:03.590550  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:03.684084  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:04.091333  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:04.178516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:04.590364  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:04.685968  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:05.091208  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:05.179114  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:05.593116  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:05.693180  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:06.099807  813918 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1002 06:39:06.192434  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:06.591063  813918 kapi.go:107] duration metric: took 1m45.004516868s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1002 06:39:06.691162  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:07.178929  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:07.445436  813918 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1002 06:39:07.683258  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:08.179496  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1002 06:39:08.321958  813918 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 06:39:08.322050  813918 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
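The retried apply above fails because kubectl's client-side validation requires every YAML document in `/etc/kubernetes/addons/ig-crd.yaml` to declare top-level `apiVersion` and `kind` fields; an empty or truncated document produces exactly the `[apiVersion not set, kind not set]` message seen in the stderr. A minimal sketch of that check (a hypothetical illustration, not kubectl's or minikube's actual code):

```python
# Sketch of the field check behind kubectl's "[apiVersion not set, kind not set]"
# validation error. Hypothetical illustration only -- kubectl's real validation
# is schema-based and far more thorough.

def validation_errors(doc: dict) -> list:
    """Return kubectl-style messages for missing required top-level fields."""
    errors = []
    if not doc.get("apiVersion"):
        errors.append("apiVersion not set")
    if not doc.get("kind"):
        errors.append("kind not set")
    return errors

# An empty/truncated YAML document (one failure mode for a bad ig-crd.yaml)
# trips both checks; a well-formed manifest header passes.
print(validation_errors({}))
print(validation_errors({"apiVersion": "apps/v1", "kind": "DaemonSet"}))
```

Note that the other manifests in the same apply (`namespace/gadget`, `daemonset.apps/gadget`, etc.) were accepted, so only `ig-crd.yaml` carries the malformed document; passing `--validate=false` as the error suggests would skip the check rather than fix the file.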
	I1002 06:39:08.683452  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:09.179353  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:09.686227  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:10.179510  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:10.683355  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:11.179458  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:11.679580  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:12.179918  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:12.684042  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:13.178652  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:13.685874  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:14.179294  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:14.688744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:15.178402  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:15.684134  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:16.178182  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:16.682141  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:17.179203  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:17.684865  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:18.183409  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:18.683201  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:19.178867  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:19.679950  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:20.179378  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:20.683751  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:21.179070  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:21.679127  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:22.178339  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:22.682554  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:23.179809  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:23.684571  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:24.178890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:24.684796  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:25.178633  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:25.683087  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:26.178740  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:26.683803  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:27.178621  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:27.679141  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:28.178920  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:28.684290  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:29.179325  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:29.680059  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:30.180120  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:30.683936  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:31.178444  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:31.683250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:32.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:32.684538  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:33.179130  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:33.684267  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:34.179364  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:34.684136  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:35.178488  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:35.683770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:36.179826  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:36.683998  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:37.179895  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:37.683890  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:38.180914  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:38.683767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:39.179513  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:39.686625  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:40.179680  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:40.684314  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:41.178731  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:41.682866  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:42.180532  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:42.685515  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:43.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:43.684036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:44.178761  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:44.678674  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:45.180677  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:45.683093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:46.178745  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:46.682966  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:47.178714  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:47.687786  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:48.180034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:48.682439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:49.179416  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:49.685544  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:50.179302  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:50.685100  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:51.179287  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:51.683778  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:52.179021  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:52.679097  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:53.178970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:53.684700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:54.179476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:54.684994  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:55.178796  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:55.679165  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:56.178666  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:56.684967  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:57.178854  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:57.678696  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:58.179624  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:58.683296  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:59.180450  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:39:59.687218  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:00.195539  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:00.689354  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:01.178732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:01.685212  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:02.179265  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:02.683955  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:03.178860  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:03.678460  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:04.178855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:04.686281  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:05.179400  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:05.679175  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:06.179017  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:06.683057  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:07.179262  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:07.684658  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:08.179829  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:08.683098  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:09.178903  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:09.686212  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:10.179744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:10.682952  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:11.178348  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:11.685085  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:12.179154  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:12.683453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:13.179437  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:13.683490  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:14.179250  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:14.684690  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:15.179775  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:15.684387  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:16.178957  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:16.678523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:17.179146  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:17.679174  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:18.179689  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:18.682903  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:19.178772  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:19.685172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:20.178915  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:20.684537  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:21.178688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:21.681514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:22.179537  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:22.683064  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:23.178976  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:23.682793  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:24.179279  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:24.685175  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:25.178553  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:25.683682  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:26.179629  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:26.679433  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:27.178986  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:27.683516  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:28.178938  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:28.684313  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:29.179037  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:29.682849  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:30.180161  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:30.683924  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:31.178283  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:31.683997  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:32.179049  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:32.685786  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:33.179179  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:33.682830  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:34.179638  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:34.683135  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:35.178744  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:35.684184  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:36.179717  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:36.679174  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:37.179123  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:37.683396  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:38.179078  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:38.682970  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:39.179304  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:39.684431  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:40.179468  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:40.683907  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:41.178963  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:41.684491  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:42.180147  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:42.678812  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:43.178520  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:43.679177  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:44.178790  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:44.684374  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:45.179855  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:45.684397  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:46.179055  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:46.685615  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:47.178939  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:47.680235  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:48.178829  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:48.682679  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:49.179766  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:49.686979  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:50.178641  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:50.683095  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:51.178582  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:51.682578  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:52.179361  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:52.684019  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:53.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:53.683211  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:54.180830  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:54.685818  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:55.179776  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:55.683755  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:56.179597  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:56.683541  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:57.178536  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:57.679350  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:58.183218  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:58.683948  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:59.179617  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:40:59.681398  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:00.200089  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:00.683523  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:01.180022  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:01.682762  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:02.179798  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:02.683809  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:03.179630  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:03.683920  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:04.178316  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:04.686534  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:05.179292  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:05.683293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:06.178370  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:06.682944  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:07.178545  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:07.685071  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:08.179215  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:08.684453  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:09.178985  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:09.688380  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:10.179014  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:10.682840  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:11.179693  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:11.683955  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:12.179386  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:12.679132  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:13.178565  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:13.680539  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:14.179430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:14.684344  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:15.179591  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:15.679368  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:16.178436  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:16.683864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:17.180546  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:17.683586  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:18.179015  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:18.679618  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:19.179120  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:19.684107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:20.178861  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:20.684034  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:21.178317  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:21.684041  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:22.178322  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:22.683407  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:23.179139  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:23.683117  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:24.178439  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:24.685938  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:25.178476  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:25.683871  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:26.178257  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:26.684421  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:27.178363  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:27.684075  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:28.178491  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:28.684622  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:29.179430  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:29.679029  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:30.179857  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:30.684822  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:31.178471  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:31.682266  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:32.178454  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:32.683741  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:33.179093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:33.684238  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:34.179255  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:34.685850  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:35.179285  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:35.684332  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:36.178487  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:36.679840  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:37.178710  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:37.684329  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:38.179191  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:38.685465  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:39.179295  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:39.684802  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:40.179488  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:40.683626  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:41.179090  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:41.683827  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:42.211958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:42.683203  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:43.178389  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:43.683767  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:44.179683  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:44.684688  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:45.179790  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:45.684540  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:46.179257  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:46.684514  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:47.178785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:47.683477  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:48.178765  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:48.684151  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:49.179311  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:49.684698  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:50.179522  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:50.684199  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:51.178816  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:51.683369  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:52.178888  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:52.683785  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:53.179801  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:53.684918  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:54.179419  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:54.686564  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:55.179115  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:55.679606  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:56.179733  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:56.684036  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:57.178170  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:57.679142  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:58.178673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:58.679408  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:59.179192  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:41:59.685245  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:00.184879  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:00.679309  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:01.178793  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:01.683892  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:02.180107  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:02.685443  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:03.178916  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:03.682980  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:04.178340  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:04.685958  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:05.178346  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:05.678858  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:06.179520  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:06.685162  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:07.178663  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:07.683927  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:08.178987  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:08.683518  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:09.179084  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:09.685719  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:10.178949  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:10.683567  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:11.179144  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:11.678751  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:12.178975  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:12.685293  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:13.178566  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:13.682732  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:14.179093  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:14.686648  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:15.178770  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:15.682752  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:16.179886  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:16.683072  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:17.178408  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:17.683343  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:18.179005  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:18.679908  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:19.178619  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:19.685331  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:20.179236  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:20.683822  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:21.179233  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:21.684864  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:22.179244  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:22.684351  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:23.180700  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:23.683915  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:24.179907  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:24.683172  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:25.178856  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:25.683739  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:26.179113  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:26.684228  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:27.178497  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:27.680321  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:28.178685  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:28.684377  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:29.178668  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:29.683298  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:30.178673  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:42:30.679836  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	... (identical "waiting for pod \"kubernetes.io/minikube-addons=registry\", current state: Pending: [<nil>]" polling entries, logged every ~0.5s from 06:42:31 through 06:43:22, omitted) ...
	I1002 06:43:22.683633  813918 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1002 06:43:23.176387  813918 kapi.go:81] temporary error: getting Pods with label selector "kubernetes.io/minikube-addons=registry" : [client rate limiter Wait returned an error: context deadline exceeded]
	I1002 06:43:23.176421  813918 kapi.go:107] duration metric: took 6m0.001003242s to wait for kubernetes.io/minikube-addons=registry ...
	W1002 06:43:23.176505  813918 out.go:285] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I1002 06:43:23.179649  813918 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, default-storageclass, volcano, nvidia-device-plugin, storage-provisioner, registry-creds, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, gcp-auth, csi-hostpath-driver, ingress
	I1002 06:43:23.182525  813918 addons.go:514] duration metric: took 6m8.685068561s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner default-storageclass volcano nvidia-device-plugin storage-provisioner registry-creds ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots gcp-auth csi-hostpath-driver ingress]
	I1002 06:43:23.182578  813918 start.go:246] waiting for cluster config update ...
	I1002 06:43:23.182605  813918 start.go:255] writing updated cluster config ...
	I1002 06:43:23.182910  813918 ssh_runner.go:195] Run: rm -f paused
	I1002 06:43:23.186967  813918 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:43:23.191359  813918 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-s68lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.195909  813918 pod_ready.go:94] pod "coredns-66bc5c9577-s68lt" is "Ready"
	I1002 06:43:23.195939  813918 pod_ready.go:86] duration metric: took 4.553514ms for pod "coredns-66bc5c9577-s68lt" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.198221  813918 pod_ready.go:83] waiting for pod "etcd-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.202513  813918 pod_ready.go:94] pod "etcd-addons-110926" is "Ready"
	I1002 06:43:23.202537  813918 pod_ready.go:86] duration metric: took 4.291712ms for pod "etcd-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.204756  813918 pod_ready.go:83] waiting for pod "kube-apiserver-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.208864  813918 pod_ready.go:94] pod "kube-apiserver-addons-110926" is "Ready"
	I1002 06:43:23.208890  813918 pod_ready.go:86] duration metric: took 4.040561ms for pod "kube-apiserver-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.211197  813918 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.591502  813918 pod_ready.go:94] pod "kube-controller-manager-addons-110926" is "Ready"
	I1002 06:43:23.591528  813918 pod_ready.go:86] duration metric: took 380.304031ms for pod "kube-controller-manager-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:23.792134  813918 pod_ready.go:83] waiting for pod "kube-proxy-4zvzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.192193  813918 pod_ready.go:94] pod "kube-proxy-4zvzf" is "Ready"
	I1002 06:43:24.192225  813918 pod_ready.go:86] duration metric: took 400.063711ms for pod "kube-proxy-4zvzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.391575  813918 pod_ready.go:83] waiting for pod "kube-scheduler-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.791416  813918 pod_ready.go:94] pod "kube-scheduler-addons-110926" is "Ready"
	I1002 06:43:24.791440  813918 pod_ready.go:86] duration metric: took 399.838153ms for pod "kube-scheduler-addons-110926" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 06:43:24.791453  813918 pod_ready.go:40] duration metric: took 1.604452407s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 06:43:24.848923  813918 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 06:43:24.852286  813918 out.go:179] * Done! kubectl is now configured to use "addons-110926" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	f006bdfdbfa9c       1611cd07b61d5       5 minutes ago       Running             busybox                                  0                   396f52d70bdd3       busybox                                    default
	8db7a8fd91b3a       bc6bf68f85c70       6 minutes ago       Running             registry                                 0                   2574946f7674b       registry-66898fdd98-926mp                  kube-system
	5f7d9891cc455       5ed383cb88c34       22 minutes ago      Running             controller                               0                   6c717320771c8       ingress-nginx-controller-9cc49f96f-srz99   ingress-nginx
	0308d38377e11       ee6d597e62dc8       22 minutes ago      Running             csi-snapshotter                          0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	25fa4fdbd3104       642ded511e141       22 minutes ago      Running             csi-provisioner                          0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	fc252b8568f42       922312104da8a       22 minutes ago      Running             liveness-probe                           0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	335d72204c3f1       08f6b2990811a       22 minutes ago      Running             hostpath                                 0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	5f68f17265ee3       deda3ad36c19b       22 minutes ago      Running             gadget                                   0                   4c1a07ae3ab5b       gadget-5sxf6                               gadget
	0e5a160912072       0107d56dbc0be       22 minutes ago      Running             node-driver-registrar                    0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	739d12f7cb55c       c67c707f59d87       22 minutes ago      Exited              patch                                    0                   bb748c608a5b6       ingress-nginx-admission-patch-bq878        ingress-nginx
	591026b1dba39       c67c707f59d87       22 minutes ago      Exited              create                                   0                   cb5a57455ef86       ingress-nginx-admission-create-lw8gl       ingress-nginx
	54cf7611bdf67       bc6c1e09a843d       22 minutes ago      Running             metrics-server                           0                   7dd5efc48ed3c       metrics-server-85b7d694d7-fg8z6            kube-system
	f6cb9c538a386       4d1e5c3e97420       22 minutes ago      Running             volume-snapshot-controller               0                   67891e8bc00da       snapshot-controller-7d9fbc56b8-xwmkw       kube-system
	0c9bf13466bdb       9a80d518f102c       22 minutes ago      Running             csi-attacher                             0                   292886057d9ec       csi-hostpath-attacher-0                    kube-system
	99c1411ad7ad7       7ce2150c8929b       22 minutes ago      Running             local-path-provisioner                   0                   b598bb55c93b6       local-path-provisioner-648f6765c9-xvgcs    local-path-storage
	5ad865f2d99af       7b85e0fbfd435       22 minutes ago      Running             registry-proxy                           0                   f2c6d58f83a8d       registry-proxy-bqxnl                       kube-system
	2b58aa20e457e       4d1e5c3e97420       22 minutes ago      Running             volume-snapshot-controller               0                   2f3e4307f0508       snapshot-controller-7d9fbc56b8-69zvz       kube-system
	e6021edb430f3       ccf6033de1d3c       22 minutes ago      Running             cloud-spanner-emulator                   0                   28bd5150ab50b       cloud-spanner-emulator-85f6b7fc65-zwxnx    default
	01c56e6095ea5       487fa743e1e22       23 minutes ago      Running             csi-resizer                              0                   507c852501681       csi-hostpath-resizer-0                     kube-system
	9ba807329b10c       1461903ec4fe9       23 minutes ago      Running             csi-external-health-monitor-controller   0                   78352623170f3       csi-hostpathplugin-mg6q4                   kube-system
	4829c9264d5b3       ba04bb24b9575       23 minutes ago      Running             storage-provisioner                      0                   cd62db6aa4ca0       storage-provisioner                        kube-system
	d607380a0ea95       138784d87c9c5       23 minutes ago      Running             coredns                                  0                   97bcb21e01196       coredns-66bc5c9577-s68lt                   kube-system
	001c4797204fc       b1a8c6f707935       23 minutes ago      Running             kindnet-cni                              0                   a8dbd581dae29       kindnet-zb4h8                              kube-system
	205ba78bdcdf4       05baa95f5142d       23 minutes ago      Running             kube-proxy                               0                   1c95f15f187e7       kube-proxy-4zvzf                           kube-system
	7d5d1641aee07       43911e833d64d       24 minutes ago      Running             kube-apiserver                           0                   111e5d5f57119       kube-apiserver-addons-110926               kube-system
	b56ea6dbe0e21       b5f57ec6b9867       24 minutes ago      Running             kube-scheduler                           0                   740338c713381       kube-scheduler-addons-110926               kube-system
	dd74ed9d21ed1       7eb2c6ff0c5a7       24 minutes ago      Running             kube-controller-manager                  0                   408527a4c051e       kube-controller-manager-addons-110926      kube-system
	8be3089b4391b       a1894772a478e       24 minutes ago      Running             etcd                                     0                   f832da367e6b5       etcd-addons-110926                         kube-system
	
	
	==> containerd <==
	Oct 02 06:59:42 addons-110926 containerd[753]: time="2025-10-02T06:59:42.695865607Z" level=info msg="stop pulling image docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: active requests=0, bytes read=11047"
	Oct 02 06:59:44 addons-110926 containerd[753]: time="2025-10-02T06:59:44.246624142Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 06:59:44 addons-110926 containerd[753]: time="2025-10-02T06:59:44.250068898Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:59:44 addons-110926 containerd[753]: time="2025-10-02T06:59:44.381422552Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 06:59:44 addons-110926 containerd[753]: time="2025-10-02T06:59:44.675123526Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 06:59:44 addons-110926 containerd[753]: time="2025-10-02T06:59:44.675444460Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=10979"
	Oct 02 07:00:22 addons-110926 containerd[753]: time="2025-10-02T07:00:22.063861256Z" level=info msg="StopPodSandbox for \"d9539fceb9e984d61894f92014e622c29d12865aa9455996109cbe2df6e7e41a\""
	Oct 02 07:00:22 addons-110926 containerd[753]: time="2025-10-02T07:00:22.080070036Z" level=info msg="received exit event container_id:\"d9539fceb9e984d61894f92014e622c29d12865aa9455996109cbe2df6e7e41a\"  id:\"d9539fceb9e984d61894f92014e622c29d12865aa9455996109cbe2df6e7e41a\"  pid:20983  exit_status:137  exited_at:{seconds:1759388422  nanos:79723527}"
	Oct 02 07:00:22 addons-110926 containerd[753]: time="2025-10-02T07:00:22.111204022Z" level=info msg="shim disconnected" id=d9539fceb9e984d61894f92014e622c29d12865aa9455996109cbe2df6e7e41a namespace=k8s.io
	Oct 02 07:00:22 addons-110926 containerd[753]: time="2025-10-02T07:00:22.111249477Z" level=warning msg="cleaning up after shim disconnected" id=d9539fceb9e984d61894f92014e622c29d12865aa9455996109cbe2df6e7e41a namespace=k8s.io
	Oct 02 07:00:22 addons-110926 containerd[753]: time="2025-10-02T07:00:22.111287958Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 02 07:00:22 addons-110926 containerd[753]: time="2025-10-02T07:00:22.149133313Z" level=info msg="TearDown network for sandbox \"d9539fceb9e984d61894f92014e622c29d12865aa9455996109cbe2df6e7e41a\" successfully"
	Oct 02 07:00:22 addons-110926 containerd[753]: time="2025-10-02T07:00:22.149193742Z" level=info msg="StopPodSandbox for \"d9539fceb9e984d61894f92014e622c29d12865aa9455996109cbe2df6e7e41a\" returns successfully"
	Oct 02 07:00:52 addons-110926 containerd[753]: time="2025-10-02T07:00:52.404945094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0,Uid:bd253650-5e7f-4e53-be29-d3d7b4a65573,Namespace:local-path-storage,Attempt:0,}"
	Oct 02 07:00:52 addons-110926 containerd[753]: time="2025-10-02T07:00:52.509652144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0,Uid:bd253650-5e7f-4e53-be29-d3d7b4a65573,Namespace:local-path-storage,Attempt:0,} returns sandbox id \"decbb16edd6c17c523339773834054249f6f30b570e7e518caea43134d137d15\""
	Oct 02 07:00:52 addons-110926 containerd[753]: time="2025-10-02T07:00:52.512189718Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 07:00:52 addons-110926 containerd[753]: time="2025-10-02T07:00:52.514655107Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:00:52 addons-110926 containerd[753]: time="2025-10-02T07:00:52.647966404Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:00:53 addons-110926 containerd[753]: time="2025-10-02T07:00:53.046947725Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:00:53 addons-110926 containerd[753]: time="2025-10-02T07:00:53.047000663Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=13274"
	Oct 02 07:01:06 addons-110926 containerd[753]: time="2025-10-02T07:01:06.247062109Z" level=info msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Oct 02 07:01:06 addons-110926 containerd[753]: time="2025-10-02T07:01:06.254718943Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:01:06 addons-110926 containerd[753]: time="2025-10-02T07:01:06.395632505Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:01:06 addons-110926 containerd[753]: time="2025-10-02T07:01:06.697369321Z" level=error msg="PullImage \"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\" failed" error="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:01:06 addons-110926 containerd[753]: time="2025-10-02T07:01:06.697415933Z" level=info msg="stop pulling image docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: active requests=0, bytes read=10978"
	
	
	==> coredns [d607380a0ea95122f5da6e25cf2168aa3ea1ff11f2efdf89f4a8c2d0e5150d23] <==
	[INFO] 10.244.0.10:50787 - 30802 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001101513s
	[INFO] 10.244.0.10:50787 - 65077 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000141051s
	[INFO] 10.244.0.10:50787 - 27995 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000170818s
	[INFO] 10.244.0.10:57105 - 40777 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000193627s
	[INFO] 10.244.0.10:57105 - 44741 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000111316s
	[INFO] 10.244.0.10:57105 - 25201 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000093429s
	[INFO] 10.244.0.10:57105 - 38571 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084034s
	[INFO] 10.244.0.10:57105 - 24208 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076166s
	[INFO] 10.244.0.10:57105 - 56789 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000139811s
	[INFO] 10.244.0.10:57105 - 46307 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001361429s
	[INFO] 10.244.0.10:57105 - 10819 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.000882336s
	[INFO] 10.244.0.10:57105 - 62476 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000092289s
	[INFO] 10.244.0.10:57105 - 29096 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.0000767s
	[INFO] 10.244.0.10:43890 - 1641 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123016s
	[INFO] 10.244.0.10:43890 - 1411 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136259s
	[INFO] 10.244.0.10:42249 - 55738 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014663s
	[INFO] 10.244.0.10:42249 - 56025 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119479s
	[INFO] 10.244.0.10:58600 - 45308 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000118355s
	[INFO] 10.244.0.10:58600 - 45497 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012044s
	[INFO] 10.244.0.10:58816 - 38609 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.0013196s
	[INFO] 10.244.0.10:58816 - 38806 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001169622s
	[INFO] 10.244.0.10:53569 - 36791 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135397s
	[INFO] 10.244.0.10:53569 - 36387 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116156s
	[INFO] 10.244.0.26:45800 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000375054s
	[INFO] 10.244.0.26:44881 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000100551s
	
	
	==> describe nodes <==
	Name:               addons-110926
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-110926
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=addons-110926
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T06_37_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-110926
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-110926"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 06:37:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-110926
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:01:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:00:28 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:00:28 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:00:28 +0000   Thu, 02 Oct 2025 06:37:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:00:28 +0000   Thu, 02 Oct 2025 06:37:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-110926
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 852f460d42254382a140bbeecb584248
	  System UUID:                c6ea63c0-97bd-4894-b738-fecc8ba127ac
	  Boot ID:                    7d897d56-c217-4cfc-926c-91f9be002777
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (25 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  default                     cloud-spanner-emulator-85f6b7fc65-zwxnx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gadget                      gadget-5sxf6                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-srz99                      100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         23m
	  kube-system                 coredns-66bc5c9577-s68lt                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     23m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 csi-hostpathplugin-mg6q4                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 etcd-addons-110926                                            100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         23m
	  kube-system                 kindnet-zb4h8                                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      23m
	  kube-system                 kube-apiserver-addons-110926                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-addons-110926                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-4zvzf                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-addons-110926                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-85b7d694d7-fg8z6                               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         23m
	  kube-system                 registry-66898fdd98-926mp                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 registry-creds-764b6fb674-s7sx5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 registry-proxy-bqxnl                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 snapshot-controller-7d9fbc56b8-69zvz                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 snapshot-controller-7d9fbc56b8-xwmkw                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  local-path-storage          helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-xvgcs                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23m                kube-proxy       
	  Normal   Starting                 24m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 24m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node addons-110926 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node addons-110926 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node addons-110926 status is now: NodeHasSufficientMemory
	  Normal   Starting                 23m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 23m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  23m                kubelet          Node addons-110926 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23m                kubelet          Node addons-110926 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23m                kubelet          Node addons-110926 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           23m                node-controller  Node addons-110926 event: Registered Node addons-110926 in Controller
	  Normal   NodeReady                23m                kubelet          Node addons-110926 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 05:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 2 06:35] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8be3089b4391b68797b9ff88ff2b0c3043e3281ca30bcb48a82169b26fb4081d] <==
	{"level":"warn","ts":"2025-10-02T06:37:24.126822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:24.150001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.461328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.478170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.495681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.527887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.548456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.563248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.624874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.689649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.719544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.736879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.755892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.770836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T06:37:44.790478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53380","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T06:39:30.364216Z","caller":"traceutil/trace.go:172","msg":"trace[134372730] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"123.100543ms","start":"2025-10-02T06:39:30.241102Z","end":"2025-10-02T06:39:30.364202Z","steps":["trace[134372730] 'process raft request'  (duration: 122.981302ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-02T06:47:04.880918Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1925}
	{"level":"info","ts":"2025-10-02T06:47:04.920117Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1925,"took":"38.610713ms","hash":2612370864,"current-db-size-bytes":8695808,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":5120000,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2025-10-02T06:47:04.920180Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2612370864,"revision":1925,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T06:52:04.887964Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2405}
	{"level":"info","ts":"2025-10-02T06:52:04.907361Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2405,"took":"18.449885ms","hash":1927945438,"current-db-size-bytes":8695808,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":3727360,"current-db-size-in-use":"3.7 MB"}
	{"level":"info","ts":"2025-10-02T06:52:04.907428Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1927945438,"revision":2405,"compact-revision":1925}
	{"level":"info","ts":"2025-10-02T06:57:04.895109Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2864}
	{"level":"info","ts":"2025-10-02T06:57:04.926419Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2864,"took":"30.706949ms","hash":805286141,"current-db-size-bytes":9138176,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":5545984,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2025-10-02T06:57:04.926487Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":805286141,"revision":2864,"compact-revision":2405}
	
	
	==> kernel <==
	 07:01:07 up  6:43,  0 user,  load average: 0.37, 0.77, 1.27
	Linux addons-110926 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [001c4797204fc8489af667e5dc44dc2de85bde6fbbb94189af8eaa6e51b826b8] <==
	I1002 06:59:06.724871       1 main.go:301] handling current node
	I1002 06:59:16.723349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:59:16.723446       1 main.go:301] handling current node
	I1002 06:59:26.722397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:59:26.722443       1 main.go:301] handling current node
	I1002 06:59:36.727529       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:59:36.727570       1 main.go:301] handling current node
	I1002 06:59:46.729324       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:59:46.729358       1 main.go:301] handling current node
	I1002 06:59:56.722821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 06:59:56.722854       1 main.go:301] handling current node
	I1002 07:00:06.725422       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:06.725457       1 main.go:301] handling current node
	I1002 07:00:16.722931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:16.722968       1 main.go:301] handling current node
	I1002 07:00:26.723365       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:26.723401       1 main.go:301] handling current node
	I1002 07:00:36.725391       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:36.725429       1 main.go:301] handling current node
	I1002 07:00:46.731290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:46.731393       1 main.go:301] handling current node
	I1002 07:00:56.723222       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:00:56.723258       1 main.go:301] handling current node
	I1002 07:01:06.722803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:01:06.722842       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7d5d1641aee0712674398096e96919d3b125a32fedea7425f03406a609a25f01] <==
	I1002 06:55:13.375964       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1002 06:55:13.513127       1 handler.go:285] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	W1002 06:55:13.858494       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": service "volcano-admission-service" not found
	I1002 06:55:13.893899       1 handler.go:285] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I1002 06:55:14.147786       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 06:55:14.185061       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 06:55:14.209779       1 handler.go:285] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I1002 06:55:14.267834       1 handler.go:285] Adding GroupVersion topology.volcano.sh v1alpha1 to ResourceManager
	I1002 06:55:14.283374       1 handler.go:285] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I1002 06:55:14.505707       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 06:55:14.840606       1 cacher.go:182] Terminating all watchers from cacher commands.bus.volcano.sh
	I1002 06:55:14.931919       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 06:55:14.932179       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1002 06:55:15.053984       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	I1002 06:55:15.149713       1 handler.go:285] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W1002 06:55:15.287278       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1002 06:55:15.371538       1 cacher.go:182] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W1002 06:55:15.395965       1 cacher.go:182] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W1002 06:55:15.429512       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1002 06:55:16.150162       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1002 06:55:16.557861       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1002 06:55:34.021110       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57690: use of closed network connection
	E1002 06:55:34.271460       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57730: use of closed network connection
	E1002 06:55:34.454019       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:57748: use of closed network connection
	I1002 06:57:07.469062       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [dd74ed9d21ed14fc6778ffc7add04a70910ec955742f31d4442b2c07c8ea86db] <==
	E1002 07:00:08.983395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:11.106341       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:11.107704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:18.632090       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:18.633252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:21.761349       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:21.762505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:26.137163       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:26.138280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:30.161884       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:30.163436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:35.949439       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:35.950550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:41.098489       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:41.099534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:46.996694       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:46.999483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:48.412982       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:48.414132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:53.077427       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:53.078809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:00:56.948008       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:00:56.949242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1002 07:01:05.777309       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1002 07:01:05.778589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [205ba78bdcdf484d8af0d0330d3a99ba39bdc20efa19428202c6c4cd7dfd9d33] <==
	I1002 06:37:16.426570       1 server_linux.go:53] "Using iptables proxy"
	I1002 06:37:16.498503       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 06:37:16.599091       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 06:37:16.599151       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 06:37:16.599225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 06:37:16.664219       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 06:37:16.664277       1 server_linux.go:132] "Using iptables Proxier"
	I1002 06:37:16.670034       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 06:37:16.670375       1 server.go:527] "Version info" version="v1.34.1"
	I1002 06:37:16.670399       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:37:16.671951       1 config.go:200] "Starting service config controller"
	I1002 06:37:16.671975       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 06:37:16.671996       1 config.go:106] "Starting endpoint slice config controller"
	I1002 06:37:16.672007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 06:37:16.672023       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 06:37:16.672032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 06:37:16.676259       1 config.go:309] "Starting node config controller"
	I1002 06:37:16.676302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 06:37:16.676311       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 06:37:16.772116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 06:37:16.772157       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 06:37:16.772192       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b56ea6dbe0e218561ee35e4169c6c63e3160ecf828f68ed8b40ef0285f668b5e] <==
	I1002 06:37:08.294088       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 06:37:08.297839       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 06:37:08.298569       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 06:37:08.301736       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 06:37:08.302088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 06:37:08.302287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1002 06:37:08.298598       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 06:37:08.303874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 06:37:08.304074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 06:37:08.304269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 06:37:08.304471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 06:37:08.308085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 06:37:08.317169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 06:37:08.317571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 06:37:08.317827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 06:37:08.317882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 06:37:08.317917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 06:37:08.317998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 06:37:08.318060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 06:37:08.325459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 06:37:08.325531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 06:37:08.325571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 06:37:08.325620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 06:37:08.325676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1002 06:37:09.602936       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:00:22 addons-110926 kubelet[1456]: I1002 07:00:22.381239    1456 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-278mv\" (UniqueName: \"kubernetes.io/projected/74efd4c2-e361-4ea7-a094-49e83409542d-kube-api-access-278mv\") on node \"addons-110926\" DevicePath \"\""
	Oct 02 07:00:23 addons-110926 kubelet[1456]: I1002 07:00:23.243747    1456 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-926mp" secret="" err="secret \"gcp-auth\" not found"
	Oct 02 07:00:23 addons-110926 kubelet[1456]: E1002 07:00:23.247538    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-s7sx5" podUID="0b84bec7-8d9d-4d30-9860-3d491871c922"
	Oct 02 07:00:24 addons-110926 kubelet[1456]: I1002 07:00:24.247073    1456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74efd4c2-e361-4ea7-a094-49e83409542d" path="/var/lib/kubelet/pods/74efd4c2-e361-4ea7-a094-49e83409542d/volumes"
	Oct 02 07:00:25 addons-110926 kubelet[1456]: E1002 07:00:25.506108    1456 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 02 07:00:25 addons-110926 kubelet[1456]: E1002 07:00:25.506204    1456 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b84bec7-8d9d-4d30-9860-3d491871c922-gcr-creds podName:0b84bec7-8d9d-4d30-9860-3d491871c922 nodeName:}" failed. No retries permitted until 2025-10-02 07:02:27.506186878 +0000 UTC m=+1517.405390473 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0b84bec7-8d9d-4d30-9860-3d491871c922-gcr-creds") pod "registry-creds-764b6fb674-s7sx5" (UID: "0b84bec7-8d9d-4d30-9860-3d491871c922") : secret "registry-creds-gcr" not found
	Oct 02 07:00:29 addons-110926 kubelet[1456]: E1002 07:00:29.243743    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:00:36 addons-110926 kubelet[1456]: E1002 07:00:36.246250    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:00:40 addons-110926 kubelet[1456]: E1002 07:00:40.245388    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:00:51 addons-110926 kubelet[1456]: E1002 07:00:51.244900    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:00:52 addons-110926 kubelet[1456]: I1002 07:00:52.118502    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/bd253650-5e7f-4e53-be29-d3d7b4a65573-data\") pod \"helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0\" (UID: \"bd253650-5e7f-4e53-be29-d3d7b4a65573\") " pod="local-path-storage/helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0"
	Oct 02 07:00:52 addons-110926 kubelet[1456]: I1002 07:00:52.118560    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/bd253650-5e7f-4e53-be29-d3d7b4a65573-script\") pod \"helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0\" (UID: \"bd253650-5e7f-4e53-be29-d3d7b4a65573\") " pod="local-path-storage/helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0"
	Oct 02 07:00:52 addons-110926 kubelet[1456]: I1002 07:00:52.118595    1456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4g2l\" (UniqueName: \"kubernetes.io/projected/bd253650-5e7f-4e53-be29-d3d7b4a65573-kube-api-access-t4g2l\") pod \"helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0\" (UID: \"bd253650-5e7f-4e53-be29-d3d7b4a65573\") " pod="local-path-storage/helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0"
	Oct 02 07:00:53 addons-110926 kubelet[1456]: E1002 07:00:53.047430    1456 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 07:00:53 addons-110926 kubelet[1456]: E1002 07:00:53.047496    1456 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 07:00:53 addons-110926 kubelet[1456]: E1002 07:00:53.047610    1456 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0_local-path-storage(bd253650-5e7f-4e53-be29-d3d7b4a65573): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:00:53 addons-110926 kubelet[1456]: E1002 07:00:53.048827    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0" podUID="bd253650-5e7f-4e53-be29-d3d7b4a65573"
	Oct 02 07:00:53 addons-110926 kubelet[1456]: E1002 07:00:53.118123    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0" podUID="bd253650-5e7f-4e53-be29-d3d7b4a65573"
	Oct 02 07:00:53 addons-110926 kubelet[1456]: E1002 07:00:53.244374    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	Oct 02 07:01:03 addons-110926 kubelet[1456]: E1002 07:01:03.244564    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/minikube-ingress-dns/manifests/sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kube-ingress-dns-minikube" podUID="ef8b2745-553d-44a6-984e-b4ab801f79f7"
	Oct 02 07:01:06 addons-110926 kubelet[1456]: E1002 07:01:06.697814    1456 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 07:01:06 addons-110926 kubelet[1456]: E1002 07:01:06.697873    1456 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Oct 02 07:01:06 addons-110926 kubelet[1456]: E1002 07:01:06.697954    1456 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0_local-path-storage(bd253650-5e7f-4e53-be29-d3d7b4a65573): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:01:06 addons-110926 kubelet[1456]: E1002 07:01:06.697994    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0" podUID="bd253650-5e7f-4e53-be29-d3d7b4a65573"
	Oct 02 07:01:07 addons-110926 kubelet[1456]: E1002 07:01:07.243758    1456 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d44826ef-2b9b-4f5d-900f-49f95628e1f7"
	
	
	==> storage-provisioner [4829c9264d5b3ae1fc764ede230e33d7252374c2ec8cd6385777a58debef5783] <==
	W1002 07:00:42.779474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:44.782561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:44.787990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:46.791809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:46.796573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:48.799338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:48.806251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:50.809059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:50.815745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:52.819063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:52.823542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:54.826855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:54.832557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:56.835741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:56.840368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:58.843783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:00:58.850980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:00.854104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:00.858604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:02.862308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:02.869084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:04.876784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:04.882033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:06.886287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:01:06.890473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-110926 -n addons-110926
helpers_test.go:269: (dbg) Run:  kubectl --context addons-110926 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod test-local-path ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5 helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-110926 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5 helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-110926 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5 helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0: exit status 1 (112.950715ms)

-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-110926/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 06:56:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mn5jj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mn5jj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m53s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-110926
	  Normal   Pulling    116s (x5 over 4m53s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     116s (x5 over 4m52s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     116s (x5 over 4m52s)  kubelet            Error: ErrImagePull
	  Warning  Failed     67s (x15 over 4m52s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x20 over 4m52s)   kubelet            Back-off pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-km9d9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-km9d9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lw8gl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bq878" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-s7sx5" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-110926 describe pod task-pv-pod test-local-path ingress-nginx-admission-create-lw8gl ingress-nginx-admission-patch-bq878 kube-ingress-dns-minikube registry-creds-764b6fb674-s7sx5 helper-pod-create-pvc-87f47409-69e1-4546-891a-797948b58aa0: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.046350743s)
--- FAIL: TestAddons/parallel/LocalPath (346.01s)

                                                
                                    
TestDockerEnvContainerd (48.03s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-330767 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-330767 --driver=docker  --container-runtime=containerd: (30.027883305s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-330767"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-330767": (1.042361403s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-CVVGtAEH6WWn/agent.852453" SSH_AGENT_PID="852454" DOCKER_HOST=ssh://docker@127.0.0.1:33868 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-CVVGtAEH6WWn/agent.852453" SSH_AGENT_PID="852454" DOCKER_HOST=ssh://docker@127.0.0.1:33868 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-CVVGtAEH6WWn/agent.852453" SSH_AGENT_PID="852454" DOCKER_HOST=ssh://docker@127.0.0.1:33868 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (921.183651ms)

                                                
                                                
-- stdout --
	Sending build context to Docker daemon  2.048kB

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

                                                
                                                
** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

                                                
                                                
** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-CVVGtAEH6WWn/agent.852453" SSH_AGENT_PID="852454" DOCKER_HOST=ssh://docker@127.0.0.1:33868 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
panic.go:636: *** TestDockerEnvContainerd FAILED at 2025-10-02 07:11:49.541240793 +0000 UTC m=+2140.428588130
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-330767
helpers_test.go:243: (dbg) docker inspect dockerenv-330767:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0a26947cc4d56160d0d954677bbc3c7bb5f9cc4dc577ced10a33cf6909a71d23",
	        "Created": "2025-10-02T07:11:11.455321502Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 850124,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:11:11.518025972Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/0a26947cc4d56160d0d954677bbc3c7bb5f9cc4dc577ced10a33cf6909a71d23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0a26947cc4d56160d0d954677bbc3c7bb5f9cc4dc577ced10a33cf6909a71d23/hostname",
	        "HostsPath": "/var/lib/docker/containers/0a26947cc4d56160d0d954677bbc3c7bb5f9cc4dc577ced10a33cf6909a71d23/hosts",
	        "LogPath": "/var/lib/docker/containers/0a26947cc4d56160d0d954677bbc3c7bb5f9cc4dc577ced10a33cf6909a71d23/0a26947cc4d56160d0d954677bbc3c7bb5f9cc4dc577ced10a33cf6909a71d23-json.log",
	        "Name": "/dockerenv-330767",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-330767:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "dockerenv-330767",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0a26947cc4d56160d0d954677bbc3c7bb5f9cc4dc577ced10a33cf6909a71d23",
	                "LowerDir": "/var/lib/docker/overlay2/71d189ebb4c1982c3878c4e59335d4fabbf31c5b3d5fc79c46464255858ca1c4-init/diff:/var/lib/docker/overlay2/f1b2a52495d4d5d1e70fc487fac677b5080c5f1320773666a738aa42def3e2df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/71d189ebb4c1982c3878c4e59335d4fabbf31c5b3d5fc79c46464255858ca1c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/71d189ebb4c1982c3878c4e59335d4fabbf31c5b3d5fc79c46464255858ca1c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/71d189ebb4c1982c3878c4e59335d4fabbf31c5b3d5fc79c46464255858ca1c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-330767",
	                "Source": "/var/lib/docker/volumes/dockerenv-330767/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-330767",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-330767",
	                "name.minikube.sigs.k8s.io": "dockerenv-330767",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7199add1d0f77ef75b87ab4f13cb69e1f1f4a305d61d58d86486bcfa54f07dc2",
	            "SandboxKey": "/var/run/docker/netns/7199add1d0f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33868"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33872"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-330767": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:e6:56:b3:65:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a95729c5151b84f34a4583a82033f86ed13821267f1d82b935c788978731022f",
	                    "EndpointID": "b21075e302d54750431dd235e04a410e9fcf98ca3ef0e364924bdf48840ae85b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-330767",
	                        "0a26947cc4d5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p dockerenv-330767 -n dockerenv-330767
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p dockerenv-330767 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p dockerenv-330767 logs -n 25: (1.333573102s)
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬─────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                      ARGS                                       │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼─────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons     │ addons-110926 addons disable volcano --alsologtostderr -v=1                     │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons     │ addons-110926 addons disable gcp-auth --alsologtostderr -v=1                    │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons     │ addons-110926 addons disable yakd --alsologtostderr -v=1                        │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ ip         │ addons-110926 ip                                                                │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons     │ addons-110926 addons disable registry --alsologtostderr -v=1                    │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 06:55 UTC │ 02 Oct 25 06:55 UTC │
	│ addons     │ addons-110926 addons disable nvidia-device-plugin --alsologtostderr -v=1        │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 06:56 UTC │ 02 Oct 25 06:56 UTC │
	│ addons     │ addons-110926 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons     │ addons-110926 addons disable cloud-spanner --alsologtostderr -v=1               │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons     │ enable headlamp -p addons-110926 --alsologtostderr -v=1                         │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:01 UTC │ 02 Oct 25 07:01 UTC │
	│ addons     │ addons-110926 addons disable volumesnapshots --alsologtostderr -v=1             │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons     │ addons-110926 addons disable csi-hostpath-driver --alsologtostderr -v=1         │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons     │ addons-110926 addons disable headlamp --alsologtostderr -v=1                    │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons     │ addons-110926 addons disable inspektor-gadget --alsologtostderr -v=1            │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons     │ addons-110926 addons disable metrics-server --alsologtostderr -v=1              │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-110926  │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons     │ addons-110926 addons disable registry-creds --alsologtostderr -v=1              │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:02 UTC │ 02 Oct 25 07:02 UTC │
	│ addons     │ addons-110926 addons disable ingress-dns --alsologtostderr -v=1                 │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:10 UTC │ 02 Oct 25 07:10 UTC │
	│ addons     │ addons-110926 addons disable ingress --alsologtostderr -v=1                     │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:10 UTC │ 02 Oct 25 07:10 UTC │
	│ stop       │ -p addons-110926                                                                │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:10 UTC │ 02 Oct 25 07:11 UTC │
	│ addons     │ enable dashboard -p addons-110926                                               │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ addons     │ disable dashboard -p addons-110926                                              │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ addons     │ disable gvisor -p addons-110926                                                 │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ delete     │ -p addons-110926                                                                │ addons-110926    │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ start      │ -p dockerenv-330767 --driver=docker  --container-runtime=containerd             │ dockerenv-330767 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-330767                                        │ dockerenv-330767 │ jenkins │ v1.37.0 │ 02 Oct 25 07:11 UTC │ 02 Oct 25 07:11 UTC │
	└────────────┴─────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:11:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:11:06.113286  849727 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:11:06.113392  849727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:06.113396  849727 out.go:374] Setting ErrFile to fd 2...
	I1002 07:11:06.113400  849727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:11:06.113680  849727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:11:06.114049  849727 out.go:368] Setting JSON to false
	I1002 07:11:06.114883  849727 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24816,"bootTime":1759364251,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 07:11:06.114937  849727 start.go:140] virtualization:  
	I1002 07:11:06.122334  849727 out.go:179] * [dockerenv-330767] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:11:06.126216  849727 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:11:06.126256  849727 notify.go:220] Checking for updates...
	I1002 07:11:06.129901  849727 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:11:06.133482  849727 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:11:06.137023  849727 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 07:11:06.140334  849727 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:11:06.143610  849727 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:11:06.147098  849727 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:11:06.180993  849727 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:11:06.181154  849727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:06.244053  849727 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:11:06.234056606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:11:06.244150  849727 docker.go:318] overlay module found
	I1002 07:11:06.247615  849727 out.go:179] * Using the docker driver based on user configuration
	I1002 07:11:06.250682  849727 start.go:304] selected driver: docker
	I1002 07:11:06.250699  849727 start.go:924] validating driver "docker" against <nil>
	I1002 07:11:06.250711  849727 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:11:06.250865  849727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:11:06.305655  849727 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-02 07:11:06.296068999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:11:06.305801  849727 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 07:11:06.306087  849727 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 07:11:06.306228  849727 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 07:11:06.309369  849727 out.go:179] * Using Docker driver with root privileges
	I1002 07:11:06.312282  849727 cni.go:84] Creating CNI manager for ""
	I1002 07:11:06.312351  849727 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:11:06.312358  849727 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 07:11:06.312439  849727 start.go:348] cluster config:
	{Name:dockerenv-330767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-330767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:06.315493  849727 out.go:179] * Starting "dockerenv-330767" primary control-plane node in "dockerenv-330767" cluster
	I1002 07:11:06.318431  849727 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 07:11:06.321498  849727 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:11:06.324417  849727 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 07:11:06.324474  849727 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 07:11:06.324498  849727 cache.go:58] Caching tarball of preloaded images
	I1002 07:11:06.324504  849727 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:11:06.324583  849727 preload.go:233] Found /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 07:11:06.324592  849727 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 07:11:06.324954  849727 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/config.json ...
	I1002 07:11:06.324974  849727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/config.json: {Name:mk3ab1b9dc68796e21605c17072fbcb76bc63a1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:06.349420  849727 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:11:06.349432  849727 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:11:06.349451  849727 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:11:06.349471  849727 start.go:360] acquireMachinesLock for dockerenv-330767: {Name:mkc13dc22fe91d361bf6a530db9316d216380932 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:11:06.349591  849727 start.go:364] duration metric: took 104.326µs to acquireMachinesLock for "dockerenv-330767"
	I1002 07:11:06.349615  849727 start.go:93] Provisioning new machine with config: &{Name:dockerenv-330767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-330767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 07:11:06.349681  849727 start.go:125] createHost starting for "" (driver="docker")
	I1002 07:11:06.355018  849727 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 07:11:06.355237  849727 start.go:159] libmachine.API.Create for "dockerenv-330767" (driver="docker")
	I1002 07:11:06.355281  849727 client.go:168] LocalClient.Create starting
	I1002 07:11:06.355377  849727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem
	I1002 07:11:06.355411  849727 main.go:141] libmachine: Decoding PEM data...
	I1002 07:11:06.355424  849727 main.go:141] libmachine: Parsing certificate...
	I1002 07:11:06.355481  849727 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem
	I1002 07:11:06.355497  849727 main.go:141] libmachine: Decoding PEM data...
	I1002 07:11:06.355505  849727 main.go:141] libmachine: Parsing certificate...
	I1002 07:11:06.355847  849727 cli_runner.go:164] Run: docker network inspect dockerenv-330767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 07:11:06.371352  849727 cli_runner.go:211] docker network inspect dockerenv-330767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 07:11:06.371431  849727 network_create.go:284] running [docker network inspect dockerenv-330767] to gather additional debugging logs...
	I1002 07:11:06.371449  849727 cli_runner.go:164] Run: docker network inspect dockerenv-330767
	W1002 07:11:06.387851  849727 cli_runner.go:211] docker network inspect dockerenv-330767 returned with exit code 1
	I1002 07:11:06.387871  849727 network_create.go:287] error running [docker network inspect dockerenv-330767]: docker network inspect dockerenv-330767: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-330767 not found
	I1002 07:11:06.387894  849727 network_create.go:289] output of [docker network inspect dockerenv-330767]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-330767 not found
	
	** /stderr **
	I1002 07:11:06.387987  849727 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:11:06.405166  849727 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400198a820}
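	The subnet parameters minikube reports above (Gateway 192.168.49.1, client range .2–.254, Broadcast .255) follow directly from the chosen /24 CIDR. A minimal sketch of that derivation using Python's stdlib `ipaddress` module — illustrative only, not minikube's actual Go implementation, and `subnet_params` is a hypothetical helper name:

```python
import ipaddress

def subnet_params(cidr: str) -> dict:
    """Derive gateway / client range / broadcast the way the log line reports them."""
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())          # usable addresses, excluding network & broadcast
    return {
        "Gateway": str(hosts[0]),      # first usable address goes to the bridge gateway
        "ClientMin": str(hosts[1]),    # containers start at the next address
        "ClientMax": str(hosts[-1]),
        "Broadcast": str(net.broadcast_address),
    }

print(subnet_params("192.168.49.0/24"))
# {'Gateway': '192.168.49.1', 'ClientMin': '192.168.49.2',
#  'ClientMax': '192.168.49.254', 'Broadcast': '192.168.49.255'}
```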
	I1002 07:11:06.405199  849727 network_create.go:124] attempt to create docker network dockerenv-330767 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 07:11:06.405257  849727 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-330767 dockerenv-330767
	I1002 07:11:06.458706  849727 network_create.go:108] docker network dockerenv-330767 192.168.49.0/24 created
	I1002 07:11:06.458728  849727 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-330767" container
	I1002 07:11:06.458814  849727 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 07:11:06.473931  849727 cli_runner.go:164] Run: docker volume create dockerenv-330767 --label name.minikube.sigs.k8s.io=dockerenv-330767 --label created_by.minikube.sigs.k8s.io=true
	I1002 07:11:06.491615  849727 oci.go:103] Successfully created a docker volume dockerenv-330767
	I1002 07:11:06.491685  849727 cli_runner.go:164] Run: docker run --rm --name dockerenv-330767-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-330767 --entrypoint /usr/bin/test -v dockerenv-330767:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 07:11:07.005180  849727 oci.go:107] Successfully prepared a docker volume dockerenv-330767
	I1002 07:11:07.005214  849727 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 07:11:07.005233  849727 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 07:11:07.005300  849727 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-330767:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 07:11:11.387360  849727 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-330767:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.382025884s)
	I1002 07:11:11.387382  849727 kic.go:203] duration metric: took 4.382145915s to extract preloaded images to volume ...
	W1002 07:11:11.387797  849727 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1002 07:11:11.387891  849727 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 07:11:11.440797  849727 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-330767 --name dockerenv-330767 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-330767 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-330767 --network dockerenv-330767 --ip 192.168.49.2 --volume dockerenv-330767:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 07:11:11.732319  849727 cli_runner.go:164] Run: docker container inspect dockerenv-330767 --format={{.State.Running}}
	I1002 07:11:11.758336  849727 cli_runner.go:164] Run: docker container inspect dockerenv-330767 --format={{.State.Status}}
	I1002 07:11:11.783363  849727 cli_runner.go:164] Run: docker exec dockerenv-330767 stat /var/lib/dpkg/alternatives/iptables
	I1002 07:11:11.834198  849727 oci.go:144] the created container "dockerenv-330767" has a running status.
	I1002 07:11:11.834235  849727 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa...
	I1002 07:11:12.108724  849727 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 07:11:12.142907  849727 cli_runner.go:164] Run: docker container inspect dockerenv-330767 --format={{.State.Status}}
	I1002 07:11:12.159853  849727 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 07:11:12.159864  849727 kic_runner.go:114] Args: [docker exec --privileged dockerenv-330767 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 07:11:12.223767  849727 cli_runner.go:164] Run: docker container inspect dockerenv-330767 --format={{.State.Status}}
	I1002 07:11:12.257868  849727 machine.go:93] provisionDockerMachine start ...
	I1002 07:11:12.257959  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:12.290032  849727 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:12.290375  849727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1002 07:11:12.290383  849727 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:11:12.290977  849727 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56136->127.0.0.1:33868: read: connection reset by peer
	I1002 07:11:15.424263  849727 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-330767
	
	I1002 07:11:15.424276  849727 ubuntu.go:182] provisioning hostname "dockerenv-330767"
	I1002 07:11:15.424354  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:15.441419  849727 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:15.441752  849727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1002 07:11:15.441762  849727 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-330767 && echo "dockerenv-330767" | sudo tee /etc/hostname
	I1002 07:11:15.588200  849727 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-330767
	
	I1002 07:11:15.588294  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:15.605710  849727 main.go:141] libmachine: Using SSH client type: native
	I1002 07:11:15.606008  849727 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33868 <nil> <nil>}
	I1002 07:11:15.606026  849727 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-330767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-330767/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-330767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:11:15.737059  849727 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:11:15.737077  849727 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-811293/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-811293/.minikube}
	I1002 07:11:15.737110  849727 ubuntu.go:190] setting up certificates
	I1002 07:11:15.737119  849727 provision.go:84] configureAuth start
	I1002 07:11:15.737186  849727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-330767
	I1002 07:11:15.755698  849727 provision.go:143] copyHostCerts
	I1002 07:11:15.755757  849727 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem, removing ...
	I1002 07:11:15.755765  849727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem
	I1002 07:11:15.755848  849727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem (1123 bytes)
	I1002 07:11:15.755952  849727 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem, removing ...
	I1002 07:11:15.755956  849727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem
	I1002 07:11:15.755982  849727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem (1679 bytes)
	I1002 07:11:15.756042  849727 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem, removing ...
	I1002 07:11:15.756045  849727 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem
	I1002 07:11:15.756073  849727 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem (1078 bytes)
	I1002 07:11:15.756123  849727 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem org=jenkins.dockerenv-330767 san=[127.0.0.1 192.168.49.2 dockerenv-330767 localhost minikube]
	I1002 07:11:16.238056  849727 provision.go:177] copyRemoteCerts
	I1002 07:11:16.238117  849727 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:11:16.238170  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:16.255143  849727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa Username:docker}
	I1002 07:11:16.352414  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 07:11:16.369129  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:11:16.389150  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1002 07:11:16.409868  849727 provision.go:87] duration metric: took 672.722259ms to configureAuth
	I1002 07:11:16.409886  849727 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:11:16.410072  849727 config.go:182] Loaded profile config "dockerenv-330767": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:11:16.410078  849727 machine.go:96] duration metric: took 4.152200178s to provisionDockerMachine
	I1002 07:11:16.410083  849727 client.go:171] duration metric: took 10.05479785s to LocalClient.Create
	I1002 07:11:16.410093  849727 start.go:167] duration metric: took 10.054857886s to libmachine.API.Create "dockerenv-330767"
	I1002 07:11:16.410098  849727 start.go:293] postStartSetup for "dockerenv-330767" (driver="docker")
	I1002 07:11:16.410106  849727 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:11:16.410157  849727 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:11:16.410194  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:16.426800  849727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa Username:docker}
	I1002 07:11:16.525173  849727 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:11:16.528960  849727 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:11:16.528978  849727 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:11:16.528988  849727 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/addons for local assets ...
	I1002 07:11:16.529042  849727 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/files for local assets ...
	I1002 07:11:16.529061  849727 start.go:296] duration metric: took 118.957605ms for postStartSetup
	I1002 07:11:16.529392  849727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-330767
	I1002 07:11:16.546088  849727 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/config.json ...
	I1002 07:11:16.546354  849727 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:11:16.546391  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:16.562872  849727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa Username:docker}
	I1002 07:11:16.654151  849727 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:11:16.658767  849727 start.go:128] duration metric: took 10.309073537s to createHost
	I1002 07:11:16.658782  849727 start.go:83] releasing machines lock for "dockerenv-330767", held for 10.309183492s
	I1002 07:11:16.658851  849727 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-330767
	I1002 07:11:16.675023  849727 ssh_runner.go:195] Run: cat /version.json
	I1002 07:11:16.675067  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:16.675308  849727 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:11:16.675377  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:16.696220  849727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa Username:docker}
	I1002 07:11:16.716866  849727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa Username:docker}
	I1002 07:11:16.796472  849727 ssh_runner.go:195] Run: systemctl --version
	I1002 07:11:16.899483  849727 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:11:16.904082  849727 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:11:16.904163  849727 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:11:16.931548  849727 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
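	The `find /etc/cni/net.d ... -exec mv` step above sidelines conflicting bridge/podman CNI configs by appending a `.mk_disabled` suffix, which is why the next line reports `87-podman-bridge.conflist` as disabled. A rough Python equivalent of that rename logic (a sketch only — minikube runs the real thing as a `find` command over SSH, and `disable_bridge_cni` is an illustrative name):

```python
import tempfile
from pathlib import Path

def disable_bridge_cni(net_d: Path) -> list:
    """Rename *bridge*/*podman* configs so the CNI runtime ignores them."""
    disabled = []
    for conf in net_d.iterdir():
        if not conf.is_file() or conf.name.endswith(".mk_disabled"):
            continue  # already sidelined, or not a plain file
        if "bridge" in conf.name or "podman" in conf.name:
            conf.rename(conf.with_name(conf.name + ".mk_disabled"))
            disabled.append(conf.name)
    return sorted(disabled)

# Scratch directory standing in for /etc/cni/net.d:
d = Path(tempfile.mkdtemp())
for name in ("87-podman-bridge.conflist", "10-kindnet.conflist"):
    (d / name).touch()
print(disable_bridge_cni(d))   # ['87-podman-bridge.conflist']
```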
	I1002 07:11:16.931562  849727 start.go:495] detecting cgroup driver to use...
	I1002 07:11:16.931592  849727 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:11:16.931648  849727 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 07:11:16.946622  849727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 07:11:16.959502  849727 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:11:16.959555  849727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:11:16.976882  849727 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:11:16.993660  849727 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:11:17.112823  849727 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:11:17.237101  849727 docker.go:234] disabling docker service ...
	I1002 07:11:17.237155  849727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:11:17.260920  849727 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:11:17.274531  849727 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:11:17.396521  849727 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:11:17.514896  849727 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:11:17.527841  849727 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:11:17.542327  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 07:11:17.550760  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 07:11:17.558933  849727 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 07:11:17.558987  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 07:11:17.567101  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 07:11:17.575955  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 07:11:17.584431  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 07:11:17.593132  849727 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:11:17.601074  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 07:11:17.609412  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 07:11:17.618493  849727 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
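	The run of `sed` commands above rewrites /etc/containerd/config.toml in place; the cgroup-driver change, for instance, amounts to an anchored, indentation-preserving substitution. The same edit sketched with Python's `re` (assuming the default TOML layout; the sample `config` string is invented for illustration):

```python
import re

config = '''\
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
'''

# Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\\1SystemdCgroup = false|g'
# \1 re-emits the captured indentation so the TOML nesting is preserved.
patched = re.sub(r"^( *)SystemdCgroup = .*$",
                 r"\1SystemdCgroup = false",
                 config, flags=re.MULTILINE)
print(patched)
```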
	I1002 07:11:17.626824  849727 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:11:17.633838  849727 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:11:17.641104  849727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:17.755003  849727 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 07:11:17.875236  849727 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 07:11:17.875295  849727 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 07:11:17.879091  849727 start.go:563] Will wait 60s for crictl version
	I1002 07:11:17.879147  849727 ssh_runner.go:195] Run: which crictl
	I1002 07:11:17.882619  849727 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:11:17.914731  849727 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 07:11:17.914797  849727 ssh_runner.go:195] Run: containerd --version
	I1002 07:11:17.939485  849727 ssh_runner.go:195] Run: containerd --version
	I1002 07:11:17.966050  849727 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 07:11:17.968936  849727 cli_runner.go:164] Run: docker network inspect dockerenv-330767 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:11:17.984390  849727 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:11:17.988367  849727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
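	The bash one-liner above keeps /etc/hosts idempotent: filter out any existing `host.minikube.internal` line, append the fresh mapping, write to a temp file, then copy it back over /etc/hosts. The filter-and-append core of that, sketched in Python on plain strings (`upsert_hosts_entry` is an illustrative name, not a minikube function):

```python
def upsert_hosts_entry(hosts_text: str, ip: str, name: str) -> str:
    """Drop any line already ending in <tab><name>, then append the new mapping."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n192.168.58.1\thost.minikube.internal\n"
print(upsert_hosts_entry(before, "192.168.49.1", "host.minikube.internal"))
# the stale 192.168.58.1 mapping is replaced by 192.168.49.1
```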
	I1002 07:11:17.998194  849727 kubeadm.go:883] updating cluster {Name:dockerenv-330767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-330767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:11:17.998311  849727 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 07:11:17.998394  849727 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:18.030336  849727 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 07:11:18.030348  849727 containerd.go:534] Images already preloaded, skipping extraction
	I1002 07:11:18.030417  849727 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:11:18.056965  849727 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 07:11:18.056977  849727 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:11:18.056985  849727 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1002 07:11:18.058486  849727 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-330767 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:dockerenv-330767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:11:18.058576  849727 ssh_runner.go:195] Run: sudo crictl info
	I1002 07:11:18.087041  849727 cni.go:84] Creating CNI manager for ""
	I1002 07:11:18.087052  849727 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:11:18.087067  849727 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:11:18.087092  849727 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-330767 NodeName:dockerenv-330767 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:11:18.087211  849727 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-330767"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:11:18.087282  849727 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:11:18.095723  849727 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:11:18.095787  849727 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:11:18.103986  849727 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1002 07:11:18.116566  849727 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:11:18.129569  849727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1002 07:11:18.141968  849727 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:11:18.145390  849727 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 07:11:18.154536  849727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:18.269440  849727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:18.286788  849727 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767 for IP: 192.168.49.2
	I1002 07:11:18.286799  849727 certs.go:195] generating shared ca certs ...
	I1002 07:11:18.286813  849727 certs.go:227] acquiring lock for ca certs: {Name:mk33b75296d4c02eee9bab3e9582ce8896a2d7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:18.286953  849727 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key
	I1002 07:11:18.286996  849727 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key
	I1002 07:11:18.287002  849727 certs.go:257] generating profile certs ...
	I1002 07:11:18.287055  849727 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/client.key
	I1002 07:11:18.287065  849727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/client.crt with IP's: []
	I1002 07:11:18.453913  849727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/client.crt ...
	I1002 07:11:18.453930  849727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/client.crt: {Name:mkd2e2934dc1ff5fc9e6b45e2c7c153011388f35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:18.454827  849727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/client.key ...
	I1002 07:11:18.454836  849727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/client.key: {Name:mk894605c2beb166109d1145e2b16a0c1f7f4d8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:18.454944  849727 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.key.f803928a
	I1002 07:11:18.454955  849727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.crt.f803928a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 07:11:18.622743  849727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.crt.f803928a ...
	I1002 07:11:18.622766  849727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.crt.f803928a: {Name:mk16907ccd2ecf95a1ce72b361aa3d05dd230600 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:18.623562  849727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.key.f803928a ...
	I1002 07:11:18.623574  849727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.key.f803928a: {Name:mkc3886b02a8d25c8f8d431335dbd052f5921af6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:18.624260  849727 certs.go:382] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.crt.f803928a -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.crt
	I1002 07:11:18.624339  849727 certs.go:386] copying /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.key.f803928a -> /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.key
	I1002 07:11:18.624461  849727 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/proxy-client.key
	I1002 07:11:18.624481  849727 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/proxy-client.crt with IP's: []
	I1002 07:11:18.923001  849727 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/proxy-client.crt ...
	I1002 07:11:18.923019  849727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/proxy-client.crt: {Name:mk6c90bb2b1a12a727a720bdc9cf115d07bc6821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:18.923225  849727 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/proxy-client.key ...
	I1002 07:11:18.923234  849727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/proxy-client.key: {Name:mk577067e2a69323c50d8f8d5b513be5580dfb66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:18.924079  849727 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:11:18.924114  849727 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:11:18.924136  849727 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:11:18.924155  849727 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem (1679 bytes)
	I1002 07:11:18.924699  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:11:18.943070  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 07:11:18.961669  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:11:18.978846  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 07:11:18.996942  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 07:11:19.016201  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:11:19.033483  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:11:19.050604  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/dockerenv-330767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:11:19.070107  849727 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:11:19.087255  849727 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:11:19.099437  849727 ssh_runner.go:195] Run: openssl version
	I1002 07:11:19.105691  849727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:11:19.114027  849727 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:19.117770  849727 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:19.117824  849727 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:11:19.162295  849727 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:11:19.170476  849727 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:11:19.173692  849727 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 07:11:19.173731  849727 kubeadm.go:400] StartCluster: {Name:dockerenv-330767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-330767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:11:19.173796  849727 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 07:11:19.173848  849727 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:11:19.201918  849727 cri.go:89] found id: ""
	I1002 07:11:19.201984  849727 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:11:19.209493  849727 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:11:19.216666  849727 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 07:11:19.216729  849727 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:11:19.224365  849727 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 07:11:19.224373  849727 kubeadm.go:157] found existing configuration files:
	
	I1002 07:11:19.224448  849727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 07:11:19.232105  849727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 07:11:19.232156  849727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 07:11:19.239426  849727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 07:11:19.246825  849727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 07:11:19.246891  849727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:11:19.254617  849727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 07:11:19.261941  849727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 07:11:19.262004  849727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:11:19.269240  849727 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 07:11:19.276544  849727 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 07:11:19.276607  849727 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:11:19.283527  849727 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 07:11:19.324542  849727 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 07:11:19.324837  849727 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 07:11:19.346559  849727 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 07:11:19.346625  849727 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1002 07:11:19.346660  849727 kubeadm.go:318] OS: Linux
	I1002 07:11:19.346706  849727 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 07:11:19.346756  849727 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1002 07:11:19.346811  849727 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 07:11:19.346861  849727 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 07:11:19.346911  849727 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 07:11:19.346960  849727 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 07:11:19.347006  849727 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 07:11:19.347055  849727 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 07:11:19.347102  849727 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1002 07:11:19.412483  849727 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 07:11:19.412590  849727 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 07:11:19.412685  849727 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 07:11:19.417936  849727 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 07:11:19.424581  849727 out.go:252]   - Generating certificates and keys ...
	I1002 07:11:19.424680  849727 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 07:11:19.424787  849727 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 07:11:19.771867  849727 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 07:11:21.004635  849727 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 07:11:21.476977  849727 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 07:11:21.887862  849727 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 07:11:22.425123  849727 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 07:11:22.425425  849727 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [dockerenv-330767 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 07:11:22.525970  849727 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 07:11:22.526262  849727 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-330767 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 07:11:22.811870  849727 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 07:11:22.973563  849727 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 07:11:23.223854  849727 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 07:11:23.224127  849727 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 07:11:23.528382  849727 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 07:11:24.505315  849727 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 07:11:24.772222  849727 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 07:11:25.239100  849727 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 07:11:25.692825  849727 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 07:11:25.693550  849727 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 07:11:25.696412  849727 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 07:11:25.699974  849727 out.go:252]   - Booting up control plane ...
	I1002 07:11:25.700094  849727 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 07:11:25.700174  849727 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 07:11:25.701982  849727 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 07:11:25.720750  849727 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 07:11:25.720932  849727 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 07:11:25.728237  849727 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 07:11:25.728510  849727 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 07:11:25.728731  849727 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 07:11:25.862744  849727 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 07:11:25.862906  849727 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 07:11:27.363755  849727 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501288863s
	I1002 07:11:27.367343  849727 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 07:11:27.367438  849727 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 07:11:27.367530  849727 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 07:11:27.367610  849727 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 07:11:29.794994  849727 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.427273642s
	I1002 07:11:31.817814  849727 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.450418177s
	I1002 07:11:32.869694  849727 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502181698s
	I1002 07:11:32.889060  849727 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1002 07:11:32.904574  849727 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1002 07:11:32.922898  849727 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1002 07:11:32.923097  849727 kubeadm.go:318] [mark-control-plane] Marking the node dockerenv-330767 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1002 07:11:32.936014  849727 kubeadm.go:318] [bootstrap-token] Using token: eddh59.5o2ofvmdbj2k2rzk
	I1002 07:11:32.938777  849727 out.go:252]   - Configuring RBAC rules ...
	I1002 07:11:32.938915  849727 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1002 07:11:32.943207  849727 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1002 07:11:32.951323  849727 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1002 07:11:32.957462  849727 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1002 07:11:32.962917  849727 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1002 07:11:32.969086  849727 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1002 07:11:33.275992  849727 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1002 07:11:33.721262  849727 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1002 07:11:34.276706  849727 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1002 07:11:34.277983  849727 kubeadm.go:318] 
	I1002 07:11:34.278067  849727 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1002 07:11:34.278071  849727 kubeadm.go:318] 
	I1002 07:11:34.278151  849727 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1002 07:11:34.278154  849727 kubeadm.go:318] 
	I1002 07:11:34.278179  849727 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1002 07:11:34.278239  849727 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1002 07:11:34.278300  849727 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1002 07:11:34.278304  849727 kubeadm.go:318] 
	I1002 07:11:34.278359  849727 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1002 07:11:34.278362  849727 kubeadm.go:318] 
	I1002 07:11:34.278410  849727 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1002 07:11:34.278421  849727 kubeadm.go:318] 
	I1002 07:11:34.278475  849727 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1002 07:11:34.278551  849727 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1002 07:11:34.278621  849727 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1002 07:11:34.278624  849727 kubeadm.go:318] 
	I1002 07:11:34.278715  849727 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1002 07:11:34.278793  849727 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1002 07:11:34.278797  849727 kubeadm.go:318] 
	I1002 07:11:34.278883  849727 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token eddh59.5o2ofvmdbj2k2rzk \
	I1002 07:11:34.278998  849727 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 \
	I1002 07:11:34.279019  849727 kubeadm.go:318] 	--control-plane 
	I1002 07:11:34.279022  849727 kubeadm.go:318] 
	I1002 07:11:34.279109  849727 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1002 07:11:34.279112  849727 kubeadm.go:318] 
	I1002 07:11:34.279196  849727 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token eddh59.5o2ofvmdbj2k2rzk \
	I1002 07:11:34.279301  849727 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:0f8aacb396b447cb031c11378b5e47ce64b4504cee1fb58c1de20a4895abc034 
	I1002 07:11:34.283038  849727 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1002 07:11:34.283265  849727 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1002 07:11:34.283373  849727 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 07:11:34.283386  849727 cni.go:84] Creating CNI manager for ""
	I1002 07:11:34.283393  849727 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:11:34.288258  849727 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 07:11:34.291080  849727 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 07:11:34.295051  849727 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 07:11:34.295062  849727 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 07:11:34.307522  849727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 07:11:34.603874  849727 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 07:11:34.604019  849727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1002 07:11:34.604107  849727 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-330767 minikube.k8s.io/updated_at=2025_10_02T07_11_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb minikube.k8s.io/name=dockerenv-330767 minikube.k8s.io/primary=true
	I1002 07:11:34.623724  849727 ops.go:34] apiserver oom_adj: -16
	I1002 07:11:34.750021  849727 kubeadm.go:1113] duration metric: took 146.050143ms to wait for elevateKubeSystemPrivileges
	I1002 07:11:34.750039  849727 kubeadm.go:402] duration metric: took 15.576313428s to StartCluster
	I1002 07:11:34.750054  849727 settings.go:142] acquiring lock: {Name:mkfabb257d5e6dc89516b7f3eecfb5ad470245b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:34.750124  849727 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:11:34.750850  849727 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/kubeconfig: {Name:mk61b1a16c6c070d43ba1e4fed7f7f8861077db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:11:34.751066  849727 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 07:11:34.751174  849727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 07:11:34.751374  849727 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:11:34.751460  849727 config.go:182] Loaded profile config "dockerenv-330767": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:11:34.751461  849727 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-330767"
	I1002 07:11:34.751475  849727 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-330767"
	I1002 07:11:34.751490  849727 addons.go:69] Setting default-storageclass=true in profile "dockerenv-330767"
	I1002 07:11:34.751499  849727 host.go:66] Checking if "dockerenv-330767" exists ...
	I1002 07:11:34.751500  849727 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-330767"
	I1002 07:11:34.751813  849727 cli_runner.go:164] Run: docker container inspect dockerenv-330767 --format={{.State.Status}}
	I1002 07:11:34.751965  849727 cli_runner.go:164] Run: docker container inspect dockerenv-330767 --format={{.State.Status}}
	I1002 07:11:34.754296  849727 out.go:179] * Verifying Kubernetes components...
	I1002 07:11:34.757808  849727 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:11:34.801134  849727 addons.go:238] Setting addon default-storageclass=true in "dockerenv-330767"
	I1002 07:11:34.801161  849727 host.go:66] Checking if "dockerenv-330767" exists ...
	I1002 07:11:34.801571  849727 cli_runner.go:164] Run: docker container inspect dockerenv-330767 --format={{.State.Status}}
	I1002 07:11:34.803907  849727 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:11:34.806943  849727 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:34.806955  849727 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:11:34.807047  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:34.827930  849727 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:34.827943  849727 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:11:34.828001  849727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-330767
	I1002 07:11:34.864559  849727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa Username:docker}
	I1002 07:11:34.883283  849727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33868 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/dockerenv-330767/id_rsa Username:docker}
	I1002 07:11:35.117145  849727 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 07:11:35.119616  849727 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:11:35.131459  849727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:11:35.248403  849727 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:11:35.503302  849727 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1002 07:11:35.504093  849727 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:11:35.504147  849727 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:11:35.754075  849727 api_server.go:72] duration metric: took 1.002980834s to wait for apiserver process to appear ...
	I1002 07:11:35.754088  849727 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:11:35.754117  849727 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1002 07:11:35.767628  849727 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1002 07:11:35.770911  849727 api_server.go:141] control plane version: v1.34.1
	I1002 07:11:35.770931  849727 api_server.go:131] duration metric: took 16.837049ms to wait for apiserver health ...
	I1002 07:11:35.771581  849727 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:11:35.774013  849727 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 07:11:35.776952  849727 addons.go:514] duration metric: took 1.025569985s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 07:11:35.778947  849727 system_pods.go:59] 5 kube-system pods found
	I1002 07:11:35.778966  849727 system_pods.go:61] "etcd-dockerenv-330767" [7826488e-7c11-449a-bcaf-d1885ae1c3d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1002 07:11:35.778974  849727 system_pods.go:61] "kube-apiserver-dockerenv-330767" [ecc24512-8637-40f8-87ea-f98b70a6916b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:11:35.778981  849727 system_pods.go:61] "kube-controller-manager-dockerenv-330767" [0ce9c43d-ce85-4ac8-8b5b-2ce55176e4df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:11:35.778987  849727 system_pods.go:61] "kube-scheduler-dockerenv-330767" [75a4ccbc-33c1-44cd-bb33-d432744d8125] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1002 07:11:35.778991  849727 system_pods.go:61] "storage-provisioner" [5d117beb-3364-4d01-abb3-ede042d4f278] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1002 07:11:35.778996  849727 system_pods.go:74] duration metric: took 7.407331ms to wait for pod list to return data ...
	I1002 07:11:35.779005  849727 kubeadm.go:586] duration metric: took 1.02791737s to wait for: map[apiserver:true system_pods:true]
	I1002 07:11:35.779017  849727 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:11:35.781964  849727 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:11:35.782606  849727 node_conditions.go:123] node cpu capacity is 2
	I1002 07:11:35.782620  849727 node_conditions.go:105] duration metric: took 3.599742ms to run NodePressure ...
	I1002 07:11:35.782631  849727 start.go:241] waiting for startup goroutines ...
	I1002 07:11:36.009252  849727 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-330767" context rescaled to 1 replicas
	I1002 07:11:36.009285  849727 start.go:246] waiting for cluster config update ...
	I1002 07:11:36.009296  849727 start.go:255] writing updated cluster config ...
	I1002 07:11:36.009608  849727 ssh_runner.go:195] Run: rm -f paused
	I1002 07:11:36.075376  849727 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 07:11:36.080644  849727 out.go:179] * Done! kubectl is now configured to use "dockerenv-330767" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	6ae840d9c287f       b1a8c6f707935       10 seconds ago      Running             kindnet-cni               0                   a5c94950b5e3c       kindnet-kk8x7                              kube-system
	6761c2e99ca0f       05baa95f5142d       10 seconds ago      Running             kube-proxy                0                   19d58d3645708       kube-proxy-4f46l                           kube-system
	f9684729fd04c       43911e833d64d       22 seconds ago      Running             kube-apiserver            0                   5cbb017cc369e       kube-apiserver-dockerenv-330767            kube-system
	84a921f04c54a       b5f57ec6b9867       22 seconds ago      Running             kube-scheduler            0                   d6b4ffd535d20       kube-scheduler-dockerenv-330767            kube-system
	c2252a6cdd65c       7eb2c6ff0c5a7       22 seconds ago      Running             kube-controller-manager   0                   80b2c8ade5b2f       kube-controller-manager-dockerenv-330767   kube-system
	f023f64f06eca       a1894772a478e       22 seconds ago      Running             etcd                      0                   a3f271d4d2b41       etcd-dockerenv-330767                      kube-system
	
	
	==> containerd <==
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.737775916Z" level=info msg="CreateContainer within sandbox \"5cbb017cc369ee7370ce330b55aebb0b44bccb62e6f9bcad1f544e6fc73ee966\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.763706897Z" level=info msg="CreateContainer within sandbox \"80b2c8ade5b2f13310ee2f733f3551f81a43b4f8a0f7d6e604228dc0e5f0f81c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c2252a6cdd65c99d17b04cd0d4b20d1a00dca5ea900dd6054f5bbf550d90ec3c\""
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.764526082Z" level=info msg="StartContainer for \"c2252a6cdd65c99d17b04cd0d4b20d1a00dca5ea900dd6054f5bbf550d90ec3c\""
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.773305023Z" level=info msg="CreateContainer within sandbox \"d6b4ffd535d20725115f7647b1ae93ae631d559741c01ad049daad7231031473\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"84a921f04c54a65f99de1a8c0581f40c64408102cbaab49a010fc0ecf3331056\""
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.774100790Z" level=info msg="StartContainer for \"84a921f04c54a65f99de1a8c0581f40c64408102cbaab49a010fc0ecf3331056\""
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.790028394Z" level=info msg="StartContainer for \"f023f64f06eca0ed279daa862bda723c6080d6f30de200c4ca9760e9a5560b67\" returns successfully"
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.795926381Z" level=info msg="CreateContainer within sandbox \"5cbb017cc369ee7370ce330b55aebb0b44bccb62e6f9bcad1f544e6fc73ee966\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9684729fd04c6690cc1ecaf57004d4f96913602e28366724a06e69badd69256\""
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.800671652Z" level=info msg="StartContainer for \"f9684729fd04c6690cc1ecaf57004d4f96913602e28366724a06e69badd69256\""
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.888798370Z" level=info msg="StartContainer for \"c2252a6cdd65c99d17b04cd0d4b20d1a00dca5ea900dd6054f5bbf550d90ec3c\" returns successfully"
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.985243951Z" level=info msg="StartContainer for \"84a921f04c54a65f99de1a8c0581f40c64408102cbaab49a010fc0ecf3331056\" returns successfully"
	Oct 02 07:11:27 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:27.985440805Z" level=info msg="StartContainer for \"f9684729fd04c6690cc1ecaf57004d4f96913602e28366724a06e69badd69256\" returns successfully"
	Oct 02 07:11:38 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:38.092334552Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.506894658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4f46l,Uid:75f8e6d1-2e11-4681-a96e-b23c7c6c0871,Namespace:kube-system,Attempt:0,}"
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.517066514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-kk8x7,Uid:821404c0-b7d7-4712-af12-b28a7222dc5b,Namespace:kube-system,Attempt:0,}"
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.597747164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4f46l,Uid:75f8e6d1-2e11-4681-a96e-b23c7c6c0871,Namespace:kube-system,Attempt:0,} returns sandbox id \"19d58d3645708b7632d3ba89a4a9e0d11d0c0a0230254a0a125ed0b5c984c75d\""
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.610583732Z" level=info msg="CreateContainer within sandbox \"19d58d3645708b7632d3ba89a4a9e0d11d0c0a0230254a0a125ed0b5c984c75d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.633408561Z" level=info msg="CreateContainer within sandbox \"19d58d3645708b7632d3ba89a4a9e0d11d0c0a0230254a0a125ed0b5c984c75d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6761c2e99ca0ff40f03516492e30e80b39fa3370d809c54a937c6d6a90ce716c\""
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.636465355Z" level=info msg="StartContainer for \"6761c2e99ca0ff40f03516492e30e80b39fa3370d809c54a937c6d6a90ce716c\""
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.700999676Z" level=info msg="StartContainer for \"6761c2e99ca0ff40f03516492e30e80b39fa3370d809c54a937c6d6a90ce716c\" returns successfully"
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.742270761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-kk8x7,Uid:821404c0-b7d7-4712-af12-b28a7222dc5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5c94950b5e3c33ca8692cc9c0f96b031dad03588229f817903585b518b91f5b\""
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.749225621Z" level=info msg="CreateContainer within sandbox \"a5c94950b5e3c33ca8692cc9c0f96b031dad03588229f817903585b518b91f5b\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.794073867Z" level=info msg="CreateContainer within sandbox \"a5c94950b5e3c33ca8692cc9c0f96b031dad03588229f817903585b518b91f5b\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"6ae840d9c287f68091b17dd17c495eb462dbbba6b1dcebebbedd8f0d3283a518\""
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.795761834Z" level=info msg="StartContainer for \"6ae840d9c287f68091b17dd17c495eb462dbbba6b1dcebebbedd8f0d3283a518\""
	Oct 02 07:11:39 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:39.929327851Z" level=info msg="StartContainer for \"6ae840d9c287f68091b17dd17c495eb462dbbba6b1dcebebbedd8f0d3283a518\" returns successfully"
	Oct 02 07:11:50 dockerenv-330767 containerd[754]: time="2025-10-02T07:11:50.315446731Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	
	
	==> describe nodes <==
	Name:               dockerenv-330767
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=dockerenv-330767
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=dockerenv-330767
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_11_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:11:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-330767
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:11:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:11:50 +0000   Thu, 02 Oct 2025 07:11:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:11:50 +0000   Thu, 02 Oct 2025 07:11:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:11:50 +0000   Thu, 02 Oct 2025 07:11:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:11:50 +0000   Thu, 02 Oct 2025 07:11:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-330767
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 6818ecc13b57447b8509455c5aea0f84
	  System UUID:                f65454d0-72d6-43ef-a910-01404bc26d5d
	  Boot ID:                    7d897d56-c217-4cfc-926c-91f9be002777
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-dsjb4                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11s
	  kube-system                 etcd-dockerenv-330767                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17s
	  kube-system                 kindnet-kk8x7                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-apiserver-dockerenv-330767             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-controller-manager-dockerenv-330767    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-proxy-4f46l                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-dockerenv-330767             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 10s   kube-proxy       
	  Normal   Starting                 17s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  17s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17s   kubelet          Node dockerenv-330767 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s   kubelet          Node dockerenv-330767 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s   kubelet          Node dockerenv-330767 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12s   node-controller  Node dockerenv-330767 event: Registered Node dockerenv-330767 in Controller
	  Normal   NodeReady                0s    kubelet          Node dockerenv-330767 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 2 05:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 2 06:35] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [f023f64f06eca0ed279daa862bda723c6080d6f30de200c4ca9760e9a5560b67] <==
	{"level":"warn","ts":"2025-10-02T07:11:29.591230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.629695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.630356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.643422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.664132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.688368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.707565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.721762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.737044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.769095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.792491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.835124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.836294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.852991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.871984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.886001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.923401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.929582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.940225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.961322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:29.979062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:30.012207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:30.063953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:30.078534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:11:30.145063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40980","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:11:50 up  6:54,  0 user,  load average: 1.21, 0.55, 0.85
	Linux dockerenv-330767 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ae840d9c287f68091b17dd17c495eb462dbbba6b1dcebebbedd8f0d3283a518] <==
	I1002 07:11:40.112679       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:11:40.113397       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 07:11:40.113616       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:11:40.113754       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:11:40.113854       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:11:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:11:40.311555       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:11:40.311592       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:11:40.311601       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:11:40.311706       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 07:11:40.612482       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:11:40.612650       1 metrics.go:72] Registering metrics
	I1002 07:11:40.612753       1 controller.go:711] "Syncing nftables rules"
	I1002 07:11:50.314745       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:11:50.314818       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f9684729fd04c6690cc1ecaf57004d4f96913602e28366724a06e69badd69256] <==
	I1002 07:11:31.187973       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1002 07:11:31.207426       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:11:31.234811       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:11:31.236236       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1002 07:11:31.236291       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1002 07:11:31.250308       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:11:31.250636       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1002 07:11:31.448023       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:11:31.882122       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1002 07:11:31.889542       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1002 07:11:31.889567       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:11:32.590906       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:11:32.640206       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:11:32.734527       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1002 07:11:32.742270       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 07:11:32.743602       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:11:32.748399       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:11:33.070864       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1002 07:11:33.685991       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:11:33.718259       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1002 07:11:33.735726       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:11:38.975883       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:11:38.981415       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:11:39.121770       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:11:39.170144       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c2252a6cdd65c99d17b04cd0d4b20d1a00dca5ea900dd6054f5bbf550d90ec3c] <==
	I1002 07:11:38.084743       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 07:11:38.085913       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:11:38.096326       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1002 07:11:38.100802       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:11:38.106930       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:11:38.111539       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 07:11:38.114557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1002 07:11:38.114725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1002 07:11:38.114851       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1002 07:11:38.115042       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 07:11:38.115374       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:11:38.115513       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1002 07:11:38.115620       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1002 07:11:38.115713       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:11:38.116058       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1002 07:11:38.116402       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1002 07:11:38.116633       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1002 07:11:38.117175       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 07:11:38.118204       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1002 07:11:38.118421       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1002 07:11:38.121178       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1002 07:11:38.121498       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:11:38.123570       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:11:38.123814       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:11:38.123919       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	
	
	==> kube-proxy [6761c2e99ca0ff40f03516492e30e80b39fa3370d809c54a937c6d6a90ce716c] <==
	I1002 07:11:39.755989       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:11:39.850501       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:11:39.951590       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:11:39.951625       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:11:39.951690       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:11:39.973296       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:11:39.973368       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:11:39.977336       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:11:39.977840       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:11:39.977869       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:11:39.981183       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:11:39.981263       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:11:39.981586       1 config.go:200] "Starting service config controller"
	I1002 07:11:39.981640       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:11:39.982057       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:11:39.982134       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:11:39.982753       1 config.go:309] "Starting node config controller"
	I1002 07:11:39.982820       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:11:39.982848       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:11:40.081858       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:11:40.082064       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:11:40.082458       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [84a921f04c54a65f99de1a8c0581f40c64408102cbaab49a010fc0ecf3331056] <==
	I1002 07:11:31.808657       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:11:31.808874       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:11:31.809315       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1002 07:11:31.809497       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1002 07:11:31.817984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:11:31.820633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:11:31.822340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:11:31.822533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:11:31.822710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:11:31.822891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:11:31.823048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:11:31.823202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:11:31.823399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:11:31.823673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:11:31.823730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:11:31.823777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:11:31.823822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:11:31.823894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:11:31.823929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:11:31.824163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:11:31.824276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:11:31.824321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:11:31.824372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:11:32.630599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1002 07:11:35.609637       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 02 07:11:34 dockerenv-330767 kubelet[1449]: I1002 07:11:34.630121    1449 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 02 07:11:34 dockerenv-330767 kubelet[1449]: I1002 07:11:34.750270    1449 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-dockerenv-330767"
	Oct 02 07:11:34 dockerenv-330767 kubelet[1449]: E1002 07:11:34.772918    1449 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-dockerenv-330767\" already exists" pod="kube-system/kube-apiserver-dockerenv-330767"
	Oct 02 07:11:34 dockerenv-330767 kubelet[1449]: I1002 07:11:34.799500    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-330767" podStartSLOduration=1.7994768749999999 podStartE2EDuration="1.799476875s" podCreationTimestamp="2025-10-02 07:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:11:34.786082705 +0000 UTC m=+1.257924623" watchObservedRunningTime="2025-10-02 07:11:34.799476875 +0000 UTC m=+1.271318776"
	Oct 02 07:11:34 dockerenv-330767 kubelet[1449]: I1002 07:11:34.818306    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-330767" podStartSLOduration=1.818287063 podStartE2EDuration="1.818287063s" podCreationTimestamp="2025-10-02 07:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:11:34.802295676 +0000 UTC m=+1.274137602" watchObservedRunningTime="2025-10-02 07:11:34.818287063 +0000 UTC m=+1.290128973"
	Oct 02 07:11:34 dockerenv-330767 kubelet[1449]: I1002 07:11:34.845274    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-dockerenv-330767" podStartSLOduration=2.844744332 podStartE2EDuration="2.844744332s" podCreationTimestamp="2025-10-02 07:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:11:34.818627643 +0000 UTC m=+1.290469561" watchObservedRunningTime="2025-10-02 07:11:34.844744332 +0000 UTC m=+1.316586234"
	Oct 02 07:11:34 dockerenv-330767 kubelet[1449]: I1002 07:11:34.933662    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-330767" podStartSLOduration=1.933644753 podStartE2EDuration="1.933644753s" podCreationTimestamp="2025-10-02 07:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:11:34.845621797 +0000 UTC m=+1.317463698" watchObservedRunningTime="2025-10-02 07:11:34.933644753 +0000 UTC m=+1.405486654"
	Oct 02 07:11:38 dockerenv-330767 kubelet[1449]: I1002 07:11:38.091894    1449 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 02 07:11:38 dockerenv-330767 kubelet[1449]: I1002 07:11:38.092959    1449 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.280675    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75f8e6d1-2e11-4681-a96e-b23c7c6c0871-xtables-lock\") pod \"kube-proxy-4f46l\" (UID: \"75f8e6d1-2e11-4681-a96e-b23c7c6c0871\") " pod="kube-system/kube-proxy-4f46l"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.280734    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxwc\" (UniqueName: \"kubernetes.io/projected/75f8e6d1-2e11-4681-a96e-b23c7c6c0871-kube-api-access-cmxwc\") pod \"kube-proxy-4f46l\" (UID: \"75f8e6d1-2e11-4681-a96e-b23c7c6c0871\") " pod="kube-system/kube-proxy-4f46l"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.280782    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75f8e6d1-2e11-4681-a96e-b23c7c6c0871-kube-proxy\") pod \"kube-proxy-4f46l\" (UID: \"75f8e6d1-2e11-4681-a96e-b23c7c6c0871\") " pod="kube-system/kube-proxy-4f46l"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.280802    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/821404c0-b7d7-4712-af12-b28a7222dc5b-cni-cfg\") pod \"kindnet-kk8x7\" (UID: \"821404c0-b7d7-4712-af12-b28a7222dc5b\") " pod="kube-system/kindnet-kk8x7"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.280828    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/821404c0-b7d7-4712-af12-b28a7222dc5b-lib-modules\") pod \"kindnet-kk8x7\" (UID: \"821404c0-b7d7-4712-af12-b28a7222dc5b\") " pod="kube-system/kindnet-kk8x7"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.280848    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9msz\" (UniqueName: \"kubernetes.io/projected/821404c0-b7d7-4712-af12-b28a7222dc5b-kube-api-access-x9msz\") pod \"kindnet-kk8x7\" (UID: \"821404c0-b7d7-4712-af12-b28a7222dc5b\") " pod="kube-system/kindnet-kk8x7"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.280867    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75f8e6d1-2e11-4681-a96e-b23c7c6c0871-lib-modules\") pod \"kube-proxy-4f46l\" (UID: \"75f8e6d1-2e11-4681-a96e-b23c7c6c0871\") " pod="kube-system/kube-proxy-4f46l"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.280890    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/821404c0-b7d7-4712-af12-b28a7222dc5b-xtables-lock\") pod \"kindnet-kk8x7\" (UID: \"821404c0-b7d7-4712-af12-b28a7222dc5b\") " pod="kube-system/kindnet-kk8x7"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.394746    1449 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 02 07:11:39 dockerenv-330767 kubelet[1449]: I1002 07:11:39.780924    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4f46l" podStartSLOduration=0.780905884 podStartE2EDuration="780.905884ms" podCreationTimestamp="2025-10-02 07:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:11:39.780554695 +0000 UTC m=+6.252396605" watchObservedRunningTime="2025-10-02 07:11:39.780905884 +0000 UTC m=+6.252747786"
	Oct 02 07:11:40 dockerenv-330767 kubelet[1449]: I1002 07:11:40.785580    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kk8x7" podStartSLOduration=1.78556326 podStartE2EDuration="1.78556326s" podCreationTimestamp="2025-10-02 07:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-02 07:11:40.785314666 +0000 UTC m=+7.257156609" watchObservedRunningTime="2025-10-02 07:11:40.78556326 +0000 UTC m=+7.257405170"
	Oct 02 07:11:50 dockerenv-330767 kubelet[1449]: I1002 07:11:50.376829    1449 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 02 07:11:50 dockerenv-330767 kubelet[1449]: I1002 07:11:50.572704    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdxrn\" (UniqueName: \"kubernetes.io/projected/3b235aaf-a209-4cf4-927b-9d4f5e5533ac-kube-api-access-wdxrn\") pod \"coredns-66bc5c9577-dsjb4\" (UID: \"3b235aaf-a209-4cf4-927b-9d4f5e5533ac\") " pod="kube-system/coredns-66bc5c9577-dsjb4"
	Oct 02 07:11:50 dockerenv-330767 kubelet[1449]: I1002 07:11:50.572889    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d117beb-3364-4d01-abb3-ede042d4f278-tmp\") pod \"storage-provisioner\" (UID: \"5d117beb-3364-4d01-abb3-ede042d4f278\") " pod="kube-system/storage-provisioner"
	Oct 02 07:11:50 dockerenv-330767 kubelet[1449]: I1002 07:11:50.572918    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2gh\" (UniqueName: \"kubernetes.io/projected/5d117beb-3364-4d01-abb3-ede042d4f278-kube-api-access-7s2gh\") pod \"storage-provisioner\" (UID: \"5d117beb-3364-4d01-abb3-ede042d4f278\") " pod="kube-system/storage-provisioner"
	Oct 02 07:11:50 dockerenv-330767 kubelet[1449]: I1002 07:11:50.572936    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b235aaf-a209-4cf4-927b-9d4f5e5533ac-config-volume\") pod \"coredns-66bc5c9577-dsjb4\" (UID: \"3b235aaf-a209-4cf4-927b-9d4f5e5533ac\") " pod="kube-system/coredns-66bc5c9577-dsjb4"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p dockerenv-330767 -n dockerenv-330767
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-330767 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-dsjb4 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context dockerenv-330767 describe pod coredns-66bc5c9577-dsjb4 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context dockerenv-330767 describe pod coredns-66bc5c9577-dsjb4 storage-provisioner: exit status 1 (98.492234ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-dsjb4" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context dockerenv-330767 describe pod coredns-66bc5c9577-dsjb4 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-330767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-330767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-330767: (2.270680117s)
--- FAIL: TestDockerEnvContainerd (48.03s)

TestFunctional/parallel/DashboardCmd (302.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-630775 --alsologtostderr -v=1]
E1002 07:28:24.880625  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-630775 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-630775 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-630775 --alsologtostderr -v=1] stderr:
I1002 07:25:05.433731  870696 out.go:360] Setting OutFile to fd 1 ...
I1002 07:25:05.435155  870696 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:25:05.435196  870696 out.go:374] Setting ErrFile to fd 2...
I1002 07:25:05.435216  870696 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:25:05.435539  870696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
I1002 07:25:05.435866  870696 mustload.go:65] Loading cluster: functional-630775
I1002 07:25:05.436315  870696 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:25:05.436869  870696 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
I1002 07:25:05.455417  870696 host.go:66] Checking if "functional-630775" exists ...
I1002 07:25:05.455751  870696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 07:25:05.517713  870696 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:25:05.508653762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1002 07:25:05.517865  870696 api_server.go:166] Checking apiserver status ...
I1002 07:25:05.517927  870696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 07:25:05.517970  870696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
I1002 07:25:05.535460  870696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
I1002 07:25:05.653905  870696 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4856/cgroup
I1002 07:25:05.662124  870696 api_server.go:182] apiserver freezer: "12:freezer:/docker/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/kubepods/burstable/pod28fb5a9da29740980ef7dee69e1e987d/6ad64793002dd63e29f2e6d0c903589a03b0c6e995ae310ae36b85d4ee81c65b"
I1002 07:25:05.662238  870696 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/kubepods/burstable/pod28fb5a9da29740980ef7dee69e1e987d/6ad64793002dd63e29f2e6d0c903589a03b0c6e995ae310ae36b85d4ee81c65b/freezer.state
I1002 07:25:05.669758  870696 api_server.go:204] freezer state: "THAWED"
I1002 07:25:05.669816  870696 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1002 07:25:05.679159  870696 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1002 07:25:05.679227  870696 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1002 07:25:05.679414  870696 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:25:05.679429  870696 addons.go:69] Setting dashboard=true in profile "functional-630775"
I1002 07:25:05.679437  870696 addons.go:238] Setting addon dashboard=true in "functional-630775"
I1002 07:25:05.679466  870696 host.go:66] Checking if "functional-630775" exists ...
I1002 07:25:05.679869  870696 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
I1002 07:25:05.699527  870696 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1002 07:25:05.702349  870696 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1002 07:25:05.705157  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1002 07:25:05.705197  870696 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1002 07:25:05.705273  870696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
I1002 07:25:05.722668  870696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
I1002 07:25:05.821819  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1002 07:25:05.821865  870696 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1002 07:25:05.834619  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1002 07:25:05.834644  870696 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1002 07:25:05.847502  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1002 07:25:05.847524  870696 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1002 07:25:05.859906  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1002 07:25:05.859935  870696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1002 07:25:05.872659  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1002 07:25:05.872681  870696 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1002 07:25:05.886129  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1002 07:25:05.886152  870696 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1002 07:25:05.900043  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1002 07:25:05.900066  870696 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1002 07:25:05.912940  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1002 07:25:05.912969  870696 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1002 07:25:05.925986  870696 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1002 07:25:05.926031  870696 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1002 07:25:05.939716  870696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1002 07:25:06.683304  870696 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-630775 addons enable metrics-server

I1002 07:25:06.687947  870696 addons.go:201] Writing out "functional-630775" config to set dashboard=true...
W1002 07:25:06.688243  870696 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1002 07:25:06.688968  870696 kapi.go:59] client config for functional-630775: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt", KeyFile:"/home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.key", CAFile:"/home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1002 07:25:06.689507  870696 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1002 07:25:06.689528  870696 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1002 07:25:06.689535  870696 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1002 07:25:06.689540  870696 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1002 07:25:06.689552  870696 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1002 07:25:06.705469  870696 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  a77d28b7-29e7-43ea-a33c-53837c148d11 1467 0 2025-10-02 07:25:06 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-02 07:25:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.107.213.6,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.107.213.6],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1002 07:25:06.705628  870696 out.go:285] * Launching proxy ...
* Launching proxy ...
I1002 07:25:06.705700  870696 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-630775 proxy --port 36195]
I1002 07:25:06.705964  870696 dashboard.go:157] Waiting for kubectl to output host:port ...
I1002 07:25:06.764339  870696 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1002 07:25:06.764394  870696 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1002 07:25:06.781396  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1edc2d3e-82f0-4ef1-b313-2e3f4be53b7d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab900 TLS:<nil>}
I1002 07:25:06.781498  870696 retry.go:31] will retry after 81.581µs: Temporary Error: unexpected response code: 503
I1002 07:25:06.786023  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4a760d61-ede5-4fc9-8db7-9998a33a6b1e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aba40 TLS:<nil>}
I1002 07:25:06.786084  870696 retry.go:31] will retry after 159.456µs: Temporary Error: unexpected response code: 503
I1002 07:25:06.789910  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9b2e75c-6ce9-438d-90a7-e82bfb9c2359] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abb80 TLS:<nil>}
I1002 07:25:06.789969  870696 retry.go:31] will retry after 336.086µs: Temporary Error: unexpected response code: 503
I1002 07:25:06.793556  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a6e169b2-cb26-400b-92ab-19e47455b321] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abcc0 TLS:<nil>}
I1002 07:25:06.793606  870696 retry.go:31] will retry after 438.406µs: Temporary Error: unexpected response code: 503
I1002 07:25:06.797523  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3ddd5d7b-ea57-400f-b0ee-c77b2ff03c80] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c23c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abe00 TLS:<nil>}
I1002 07:25:06.797571  870696 retry.go:31] will retry after 684.879µs: Temporary Error: unexpected response code: 503
I1002 07:25:06.801155  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[03fcc966-3bc7-4606-b7cf-f3affff702a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e000 TLS:<nil>}
I1002 07:25:06.801232  870696 retry.go:31] will retry after 574.479µs: Temporary Error: unexpected response code: 503
I1002 07:25:06.804954  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b2d6286e-74a0-437c-8500-6f420ea83c0a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x400064e700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001ed400 TLS:<nil>}
I1002 07:25:06.805001  870696 retry.go:31] will retry after 1.281315ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.809839  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[47ec3ca6-554b-49ef-a875-a46b64dbbe6d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x400064e780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001ed680 TLS:<nil>}
I1002 07:25:06.809886  870696 retry.go:31] will retry after 1.115576ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.814668  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c036f8f2-0497-4942-af58-ba1a7e4a3b7b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x400064e840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001ed7c0 TLS:<nil>}
I1002 07:25:06.814713  870696 retry.go:31] will retry after 2.598259ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.821249  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04f43302-8e5d-4e07-8ed8-818148c1486d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x400064e900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001ed900 TLS:<nil>}
I1002 07:25:06.821302  870696 retry.go:31] will retry after 5.165959ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.834362  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8b6ad10f-fdfa-406d-811c-7242b3407ba1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x400064e980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001eda40 TLS:<nil>}
I1002 07:25:06.834488  870696 retry.go:31] will retry after 8.346685ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.846669  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92a24f66-94a0-494b-a30a-f39ee822feaf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e140 TLS:<nil>}
I1002 07:25:06.846732  870696 retry.go:31] will retry after 11.3776ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.861666  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[04e57f5c-9ee2-476c-9c7d-ae3acbd37404] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e280 TLS:<nil>}
I1002 07:25:06.861740  870696 retry.go:31] will retry after 14.417328ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.880138  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[195958bd-5a94-4846-8f24-9ccc0d0c251a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e3c0 TLS:<nil>}
I1002 07:25:06.880213  870696 retry.go:31] will retry after 24.849517ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.908512  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7969df32-650c-455a-9a6c-d512dc6682f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e500 TLS:<nil>}
I1002 07:25:06.908578  870696 retry.go:31] will retry after 14.668436ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.926798  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d62c4589-798f-4845-910b-7c825d073a04] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e640 TLS:<nil>}
I1002 07:25:06.926862  870696 retry.go:31] will retry after 51.596155ms: Temporary Error: unexpected response code: 503
I1002 07:25:06.981939  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e6c83298-70a0-4844-b4fe-25b0f1f2688f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:06 GMT]] Body:0x40007c2bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e780 TLS:<nil>}
I1002 07:25:06.982020  870696 retry.go:31] will retry after 74.681489ms: Temporary Error: unexpected response code: 503
I1002 07:25:07.060201  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dad6a66d-7e3a-4af8-b4e8-896c6c3e78cf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:07 GMT]] Body:0x40007c2c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026e8c0 TLS:<nil>}
I1002 07:25:07.060283  870696 retry.go:31] will retry after 115.957316ms: Temporary Error: unexpected response code: 503
I1002 07:25:07.179422  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2c764b5a-191d-429a-8c34-1ada09a78de3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:07 GMT]] Body:0x40007c2d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026ea00 TLS:<nil>}
I1002 07:25:07.179484  870696 retry.go:31] will retry after 121.884339ms: Temporary Error: unexpected response code: 503
I1002 07:25:07.304921  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[52fd9f23-a257-4b82-9dce-0dd09ec5ee19] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:07 GMT]] Body:0x400064eec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001edb80 TLS:<nil>}
I1002 07:25:07.305011  870696 retry.go:31] will retry after 242.667693ms: Temporary Error: unexpected response code: 503
I1002 07:25:07.551546  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a5438dd-f149-439b-9e39-25b8d368d1c5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:07 GMT]] Body:0x40007c2fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026eb40 TLS:<nil>}
I1002 07:25:07.551616  870696 retry.go:31] will retry after 183.074123ms: Temporary Error: unexpected response code: 503
I1002 07:25:07.737831  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[45a8961b-b54b-4e6e-8bef-cc985935070c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:07 GMT]] Body:0x400064ef80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001edcc0 TLS:<nil>}
I1002 07:25:07.737905  870696 retry.go:31] will retry after 509.169837ms: Temporary Error: unexpected response code: 503
I1002 07:25:08.256102  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[76d38056-6708-4f96-a7eb-59436946c4b6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:08 GMT]] Body:0x40007c30c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026ec80 TLS:<nil>}
I1002 07:25:08.256183  870696 retry.go:31] will retry after 1.10650004s: Temporary Error: unexpected response code: 503
I1002 07:25:09.366312  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c6e571f-16bc-4e80-a7b9-50bbb7550f99] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:09 GMT]] Body:0x40007c3280 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001ede00 TLS:<nil>}
I1002 07:25:09.366378  870696 retry.go:31] will retry after 981.088964ms: Temporary Error: unexpected response code: 503
I1002 07:25:10.350568  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c1e178e-5a18-46f3-a72f-556fcf535e32] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:10 GMT]] Body:0x400064f1c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40016d6000 TLS:<nil>}
I1002 07:25:10.350630  870696 retry.go:31] will retry after 1.762745452s: Temporary Error: unexpected response code: 503
I1002 07:25:12.117423  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[10f83295-fcfc-40a5-b0ce-0ad849f251a9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:12 GMT]] Body:0x400064fb00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026edc0 TLS:<nil>}
I1002 07:25:12.117497  870696 retry.go:31] will retry after 1.469106911s: Temporary Error: unexpected response code: 503
I1002 07:25:13.590148  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ec2ffd7-11fb-4dd8-ae14-46091e6b907e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:13 GMT]] Body:0x400064fbc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026ef00 TLS:<nil>}
I1002 07:25:13.590210  870696 retry.go:31] will retry after 4.137953632s: Temporary Error: unexpected response code: 503
I1002 07:25:17.731048  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[26d8db9d-7d67-4c73-b686-2ac94b163eb8] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:17 GMT]] Body:0x40007c3440 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026f040 TLS:<nil>}
I1002 07:25:17.731110  870696 retry.go:31] will retry after 5.650023415s: Temporary Error: unexpected response code: 503
I1002 07:25:23.386810  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8a6dfbca-58c6-4ddf-8f04-9c35f9530ca9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:23 GMT]] Body:0x40007c35c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40016d6140 TLS:<nil>}
I1002 07:25:23.386873  870696 retry.go:31] will retry after 5.403497699s: Temporary Error: unexpected response code: 503
I1002 07:25:28.794629  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[af2a41e2-7e8a-4f46-a246-df5898befa60] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:28 GMT]] Body:0x40007c3640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026f180 TLS:<nil>}
I1002 07:25:28.794686  870696 retry.go:31] will retry after 14.833705989s: Temporary Error: unexpected response code: 503
I1002 07:25:43.631639  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f4aab629-0ff7-4db3-8a37-e1e2c0894ae4] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:25:43 GMT]] Body:0x40007c3780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40016d6280 TLS:<nil>}
I1002 07:25:43.631702  870696 retry.go:31] will retry after 17.531831788s: Temporary Error: unexpected response code: 503
I1002 07:26:01.167564  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[12c79331-3f31-49b8-94d9-850061a1e96a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:26:01 GMT]] Body:0x400171e040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40016d63c0 TLS:<nil>}
I1002 07:26:01.167636  870696 retry.go:31] will retry after 41.723692685s: Temporary Error: unexpected response code: 503
I1002 07:26:42.896574  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[71391646-1f03-4f1d-825c-bfad32362cf5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:26:42 GMT]] Body:0x400171e100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026f2c0 TLS:<nil>}
I1002 07:26:42.896640  870696 retry.go:31] will retry after 38.667720488s: Temporary Error: unexpected response code: 503
I1002 07:27:21.567529  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad339c29-208e-4aff-9428-31fa2a98caee] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:27:21 GMT]] Body:0x40008d2340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400026f400 TLS:<nil>}
I1002 07:27:21.567600  870696 retry.go:31] will retry after 47.524026379s: Temporary Error: unexpected response code: 503
I1002 07:28:09.096560  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d6f683e-dec7-496c-bd98-2c2857f67123] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:28:09 GMT]] Body:0x40007c21c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40016d6500 TLS:<nil>}
I1002 07:28:09.096625  870696 retry.go:31] will retry after 1m13.581374165s: Temporary Error: unexpected response code: 503
I1002 07:29:22.682555  870696 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3f3a22fc-55da-418a-826f-827f737cd501] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Thu, 02 Oct 2025 07:29:22 GMT]] Body:0x40007c2100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40016d6640 TLS:<nil>}
I1002 07:29:22.682644  870696 retry.go:31] will retry after 1m9.857440604s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-630775
helpers_test.go:243: (dbg) docker inspect functional-630775:
-- stdout --
	[
	    {
	        "Id": "59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65",
	        "Created": "2025-10-02T07:12:42.807200683Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 857886,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:12:42.868034081Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/hostname",
	        "HostsPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/hosts",
	        "LogPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65-json.log",
	        "Name": "/functional-630775",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-630775:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-630775",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65",
	                "LowerDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95-init/diff:/var/lib/docker/overlay2/f1b2a52495d4d5d1e70fc487fac677b5080c5f1320773666a738aa42def3e2df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-630775",
	                "Source": "/var/lib/docker/volumes/functional-630775/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-630775",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-630775",
	                "name.minikube.sigs.k8s.io": "functional-630775",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b61e4392fa5332ed827648e648c730efdd836e49a062819e890c14e7af22069",
	            "SandboxKey": "/var/run/docker/netns/7b61e4392fa5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-630775": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:e5:ea:59:a6:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "53c71bd34c60a004896ad1741793966c0aa2c75408be79d9661dcac532bd3113",
	                    "EndpointID": "113beb41032ff7995405d2b7630ce3ad773082757f0e8de5d718a56a03503484",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-630775",
	                        "59dc05e609c7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
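The post-mortem dumps the full `docker inspect` JSON; usually only one field matters, e.g. the published host port for the API server (`8441/tcp` → `33881` above). A small sketch of pulling that field out of the inspect output, declaring only the schema fragment needed (the `hostPort` helper and sample input are illustrative, not part of the test harness):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// inspectEntry declares only the slice of the docker-inspect schema we
// need: NetworkSettings.Ports maps "port/proto" to host bindings.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// hostPort extracts the first published host port for containerPort
// (e.g. "8441/tcp") from raw `docker inspect` JSON output.
func hostPort(raw []byte, containerPort string) (string, error) {
	var entries []inspectEntry
	if err := json.Unmarshal(raw, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("empty inspect output")
	}
	bindings := entries[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no binding for %s", containerPort)
	}
	return bindings[0].HostPort, nil
}

func main() {
	raw := []byte(`[{"NetworkSettings":{"Ports":{"8441/tcp":[{"HostIp":"127.0.0.1","HostPort":"33881"}]}}}]`)
	p, err := hostPort(raw, "8441/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}
```

The same lookup can be done without code via `docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-630775`.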
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-630775 -n functional-630775
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 logs -n 25: (1.445668784s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-630775 ssh findmnt -T /mount1                                                                        │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh            │ functional-630775 ssh findmnt -T /mount2                                                                        │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh            │ functional-630775 ssh findmnt -T /mount3                                                                        │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ mount          │ -p functional-630775 --kill=true                                                                                │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ addons         │ functional-630775 addons list                                                                                   │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ addons         │ functional-630775 addons list -o json                                                                           │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ start          │ -p functional-630775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:25 UTC │                     │
	│ start          │ -p functional-630775 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd           │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:25 UTC │                     │
	│ start          │ -p functional-630775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:25 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-630775 --alsologtostderr -v=1                                                  │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:25 UTC │                     │
	│ service        │ functional-630775 service list                                                                                  │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:28 UTC │ 02 Oct 25 07:28 UTC │
	│ service        │ functional-630775 service list -o json                                                                          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:28 UTC │ 02 Oct 25 07:29 UTC │
	│ service        │ functional-630775 service --namespace=default --https --url hello-node                                          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │                     │
	│ service        │ functional-630775 service hello-node --url --format={{.IP}}                                                     │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │                     │
	│ service        │ functional-630775 service hello-node --url                                                                      │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │                     │
	│ image          │ functional-630775 image ls --format short --alsologtostderr                                                     │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	│ image          │ functional-630775 image ls --format yaml --alsologtostderr                                                      │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	│ ssh            │ functional-630775 ssh pgrep buildkitd                                                                           │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │                     │
	│ image          │ functional-630775 image build -t localhost/my-image:functional-630775 testdata/build --alsologtostderr          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	│ image          │ functional-630775 image ls                                                                                      │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	│ image          │ functional-630775 image ls --format json --alsologtostderr                                                      │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	│ image          │ functional-630775 image ls --format table --alsologtostderr                                                     │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	│ update-context │ functional-630775 update-context --alsologtostderr -v=2                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	│ update-context │ functional-630775 update-context --alsologtostderr -v=2                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	│ update-context │ functional-630775 update-context --alsologtostderr -v=2                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:29 UTC │ 02 Oct 25 07:29 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:25:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:25:05.219704  870650 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:25:05.219894  870650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:25:05.219907  870650 out.go:374] Setting ErrFile to fd 2...
	I1002 07:25:05.219912  870650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:25:05.220899  870650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:25:05.221332  870650 out.go:368] Setting JSON to false
	I1002 07:25:05.222297  870650 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25655,"bootTime":1759364251,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 07:25:05.222375  870650 start.go:140] virtualization:  
	I1002 07:25:05.225685  870650 out.go:179] * [functional-630775] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:25:05.228689  870650 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:25:05.228815  870650 notify.go:220] Checking for updates...
	I1002 07:25:05.235440  870650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:25:05.238351  870650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:25:05.241217  870650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 07:25:05.244100  870650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:25:05.246946  870650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:25:05.250398  870650 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:25:05.251012  870650 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:25:05.292881  870650 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:25:05.293038  870650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:25:05.358864  870650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:25:05.349438375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:25:05.358971  870650 docker.go:318] overlay module found
	I1002 07:25:05.364136  870650 out.go:179] * Using the docker driver based on existing profile
	I1002 07:25:05.367053  870650 start.go:304] selected driver: docker
	I1002 07:25:05.367074  870650 start.go:924] validating driver "docker" against &{Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:25:05.367280  870650 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:25:05.370873  870650 out.go:203] 
	W1002 07:25:05.373868  870650 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1002 07:25:05.376640  870650 out.go:203] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f717db2706050       1611cd07b61d5       15 minutes ago      Exited              mount-munger              0                   a0764ff8470de       busybox-mount                               default
	9c9c69562b8f2       35f3cbee4fb77       15 minutes ago      Running             nginx                     0                   e8dc22f25086e       nginx-svc                                   default
	6ad64793002dd       43911e833d64d       15 minutes ago      Running             kube-apiserver            0                   3762e74a92c10       kube-apiserver-functional-630775            kube-system
	178c96249f9c1       7eb2c6ff0c5a7       15 minutes ago      Running             kube-controller-manager   2                   7c212a34d95f5       kube-controller-manager-functional-630775   kube-system
	602bfadd7763e       a1894772a478e       16 minutes ago      Running             etcd                      1                   b2957b80a9860       etcd-functional-630775                      kube-system
	004330dfe2dd2       b5f57ec6b9867       16 minutes ago      Running             kube-scheduler            1                   29714899aa300       kube-scheduler-functional-630775            kube-system
	8ba344dad1233       7eb2c6ff0c5a7       16 minutes ago      Exited              kube-controller-manager   1                   7c212a34d95f5       kube-controller-manager-functional-630775   kube-system
	49a7d1907be92       ba04bb24b9575       16 minutes ago      Running             storage-provisioner       1                   300b99d061daa       storage-provisioner                         kube-system
	89b98ca1639a4       05baa95f5142d       16 minutes ago      Running             kube-proxy                1                   ad703d64a5ee8       kube-proxy-9nzx4                            kube-system
	8dcd88165ca08       b1a8c6f707935       16 minutes ago      Running             kindnet-cni               1                   0b1223812eb7c       kindnet-q2985                               kube-system
	9d5e48870ba26       138784d87c9c5       16 minutes ago      Running             coredns                   1                   059f411532ccb       coredns-66bc5c9577-prnlg                    kube-system
	3fbb4a2c8d8bc       138784d87c9c5       16 minutes ago      Exited              coredns                   0                   059f411532ccb       coredns-66bc5c9577-prnlg                    kube-system
	f53adf4e4cfba       ba04bb24b9575       16 minutes ago      Exited              storage-provisioner       0                   300b99d061daa       storage-provisioner                         kube-system
	c55ee29bbc732       b1a8c6f707935       16 minutes ago      Exited              kindnet-cni               0                   0b1223812eb7c       kindnet-q2985                               kube-system
	8ca7e16833e24       05baa95f5142d       16 minutes ago      Exited              kube-proxy                0                   ad703d64a5ee8       kube-proxy-9nzx4                            kube-system
	03ee97847e6c7       b5f57ec6b9867       17 minutes ago      Exited              kube-scheduler            0                   29714899aa300       kube-scheduler-functional-630775            kube-system
	1bd4dd24c2653       a1894772a478e       17 minutes ago      Exited              etcd                      0                   b2957b80a9860       etcd-functional-630775                      kube-system
	
	
	==> containerd <==
	Oct 02 07:26:40 functional-630775 containerd[3593]: time="2025-10-02T07:26:40.583575411Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:26:40 functional-630775 containerd[3593]: time="2025-10-02T07:26:40.736389976Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:26:41 functional-630775 containerd[3593]: time="2025-10-02T07:26:41.024497862Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:26:41 functional-630775 containerd[3593]: time="2025-10-02T07:26:41.024560736Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 02 07:28:00 functional-630775 containerd[3593]: time="2025-10-02T07:28:00.580439602Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 02 07:28:00 functional-630775 containerd[3593]: time="2025-10-02T07:28:00.582939524Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:28:00 functional-630775 containerd[3593]: time="2025-10-02T07:28:00.731616600Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:28:01 functional-630775 containerd[3593]: time="2025-10-02T07:28:01.111653402Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:28:01 functional-630775 containerd[3593]: time="2025-10-02T07:28:01.111701745Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=12709"
	Oct 02 07:28:09 functional-630775 containerd[3593]: time="2025-10-02T07:28:09.579421480Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 02 07:28:09 functional-630775 containerd[3593]: time="2025-10-02T07:28:09.581794316Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:28:09 functional-630775 containerd[3593]: time="2025-10-02T07:28:09.716523963Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:28:09 functional-630775 containerd[3593]: time="2025-10-02T07:28:09.993445437Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:28:09 functional-630775 containerd[3593]: time="2025-10-02T07:28:09.994288870Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 02 07:29:09 functional-630775 containerd[3593]: time="2025-10-02T07:29:09.411855632Z" level=info msg="shim disconnected" id=wyhgwhxfeqtkfa3t7n4958e8d namespace=k8s.io
	Oct 02 07:29:09 functional-630775 containerd[3593]: time="2025-10-02T07:29:09.411900866Z" level=warning msg="cleaning up after shim disconnected" id=wyhgwhxfeqtkfa3t7n4958e8d namespace=k8s.io
	Oct 02 07:29:09 functional-630775 containerd[3593]: time="2025-10-02T07:29:09.411938157Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 02 07:29:09 functional-630775 containerd[3593]: time="2025-10-02T07:29:09.663398064Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-630775\""
	Oct 02 07:29:09 functional-630775 containerd[3593]: time="2025-10-02T07:29:09.669411025Z" level=info msg="ImageCreate event name:\"sha256:a4c49d7d972be5ec8254294c727243973d60fc94d1591fd6ac031ddf70ee586c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 02 07:29:09 functional-630775 containerd[3593]: time="2025-10-02T07:29:09.669844589Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-630775\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Oct 02 07:29:43 functional-630775 containerd[3593]: time="2025-10-02T07:29:43.579100983Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 07:29:43 functional-630775 containerd[3593]: time="2025-10-02T07:29:43.582022579Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:29:43 functional-630775 containerd[3593]: time="2025-10-02T07:29:43.719267851Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:29:44 functional-630775 containerd[3593]: time="2025-10-02T07:29:44.123395998Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:29:44 functional-630775 containerd[3593]: time="2025-10-02T07:29:44.123432559Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=11740"
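	
	The repeated `failed to decode hosts.toml ... invalid `host` tree` errors above usually indicate a malformed per-registry hosts.toml under containerd's certs.d directory. For reference, a minimal well-formed file looks like the sketch below; the path, mirror URL, and capabilities shown are assumptions for illustration, not values recovered from this run (minikube normally generates this file itself):
	
	```toml
	# /etc/containerd/certs.d/docker.io/hosts.toml  (path is an assumption)
	# Upstream registry that pull requests fall back to.
	server = "https://registry-1.docker.io"
	
	# Each mirror must be a [host."<url>"] table; a bare `host = ...` key or a
	# mistyped table header produces the "invalid `host` tree" decode error.
	[host."https://mirror.example.com"]  # hypothetical mirror endpoint
	  capabilities = ["pull", "resolve"]
	```
	
	Note that the decode errors are separate from the `429 Too Many Requests` failures in the same section, which come from Docker Hub's unauthenticated pull rate limit rather than from configuration.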
	
	
	==> coredns [3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38638 - 28631 "HINFO IN 8890447447211847089.1590523317042042169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012992148s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d5e48870ba26acf37929a5697515a9c28c95aa154630492e8a65ff7db1cbe96] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56466 - 63142 "HINFO IN 2250469666875045467.358806669876498839. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016943821s
	
	
	==> describe nodes <==
	Name:               functional-630775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-630775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-630775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_13_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:13:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-630775
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:29:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:29:15 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:29:15 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:29:15 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:29:15 +0000   Thu, 02 Oct 2025 07:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-630775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cadd65090dff457dbb73450103633ff2
	  System UUID:                6a9d513c-1640-40d4-8a86-98c871c3750d
	  Boot ID:                    7d897d56-c217-4cfc-926c-91f9be002777
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-sd479                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-7d85dfc575-xzj2s           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-66bc5c9577-prnlg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-functional-630775                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-q2985                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-functional-630775              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-630775     200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-9nzx4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-functional-630775              100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-hmtnk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-m5b2d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 16m                kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                node-controller  Node functional-630775 event: Registered Node functional-630775 in Controller
	  Normal   NodeReady                16m                kubelet          Node functional-630775 status is now: NodeReady
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node functional-630775 event: Registered Node functional-630775 in Controller
	
	
	==> dmesg <==
	[Oct 2 05:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 2 06:35] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd] <==
	{"level":"warn","ts":"2025-10-02T07:13:02.525498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.545863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.572157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.598265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.615644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.641510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.739888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36202","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:13:52.114867Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:13:52.114936Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-630775","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T07:13:52.115053Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:59.121770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:59.123526Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.123603Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T07:13:59.123856Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T07:13:59.123880Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124602Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124651Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:59.124661Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124700Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124720Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:59.124727Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.127456Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T07:13:59.127532Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.127611Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T07:13:59.127622Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-630775","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [602bfadd7763eb054613d766eab0c38eff37bc1a71150682c8892da1032e031a] <==
	{"level":"warn","ts":"2025-10-02T07:14:16.567822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.580883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.596396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.612225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.626910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.645890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.661313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.676336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.692121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.707254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.724915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.741439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.757272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.777617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.788071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.820394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.836544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.851858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.929953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:24:15.884828Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1066}
	{"level":"info","ts":"2025-10-02T07:24:15.908169Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1066,"took":"23.017939ms","hash":2107747004,"current-db-size-bytes":3158016,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1327104,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-02T07:24:15.908227Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2107747004,"revision":1066,"compact-revision":-1}
	{"level":"info","ts":"2025-10-02T07:29:15.891148Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1373}
	{"level":"info","ts":"2025-10-02T07:29:15.895008Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1373,"took":"3.333188ms","hash":1632448622,"current-db-size-bytes":3158016,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":2240512,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-10-02T07:29:15.895058Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1632448622,"revision":1373,"compact-revision":1066}
	
	
	==> kernel <==
	 07:30:06 up  7:12,  0 user,  load average: 0.22, 0.36, 0.60
	Linux functional-630775 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8dcd88165ca08da5e62e74301de3c24c91e43ad60914ede11e5bfc04c0dcfff6] <==
	I1002 07:28:03.242917       1 main.go:301] handling current node
	I1002 07:28:13.241615       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:28:13.241660       1 main.go:301] handling current node
	I1002 07:28:23.242264       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:28:23.242298       1 main.go:301] handling current node
	I1002 07:28:33.242265       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:28:33.242325       1 main.go:301] handling current node
	I1002 07:28:43.241575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:28:43.241612       1 main.go:301] handling current node
	I1002 07:28:53.241281       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:28:53.241370       1 main.go:301] handling current node
	I1002 07:29:03.248871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:29:03.248907       1 main.go:301] handling current node
	I1002 07:29:13.249555       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:29:13.249662       1 main.go:301] handling current node
	I1002 07:29:23.248959       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:29:23.248994       1 main.go:301] handling current node
	I1002 07:29:33.249151       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:29:33.249189       1 main.go:301] handling current node
	I1002 07:29:43.242316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:29:43.242351       1 main.go:301] handling current node
	I1002 07:29:53.241924       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:29:53.241967       1 main.go:301] handling current node
	I1002 07:30:03.245234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:30:03.245506       1 main.go:301] handling current node
	
	
	==> kindnet [c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c] <==
	I1002 07:13:12.711039       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:13:12.711295       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 07:13:12.711458       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:13:12.711478       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:13:12.711488       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:13:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:13:12.915994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:13:12.916209       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:13:12.916310       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:13:12.919193       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 07:13:13.119841       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:13:13.119865       1 metrics.go:72] Registering metrics
	I1002 07:13:13.208874       1 controller.go:711] "Syncing nftables rules"
	I1002 07:13:22.922829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:22.922892       1 main.go:301] handling current node
	I1002 07:13:32.922862       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:32.922901       1 main.go:301] handling current node
	I1002 07:13:42.916843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:42.916881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ad64793002dd63e29f2e6d0c903589a03b0c6e995ae310ae36b85d4ee81c65b] <==
	I1002 07:14:17.763062       1 policy_source.go:240] refreshing policies
	I1002 07:14:17.763376       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:14:17.763564       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:14:17.763609       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:14:17.763777       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 07:14:17.809707       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:14:17.822548       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:14:18.445605       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:14:18.560451       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 07:14:18.773767       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 07:14:18.775445       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:14:18.783231       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:14:19.495814       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:14:19.631063       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:14:19.697525       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:14:19.704659       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:14:21.128106       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:14:31.174050       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.102.190"}
	I1002 07:14:37.116336       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.131.33"}
	I1002 07:14:58.634088       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.89.202"}
	I1002 07:18:57.713717       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.220.51"}
	I1002 07:24:17.732584       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:25:06.376567       1 controller.go:667] quota admission added evaluator for: namespaces
	I1002 07:25:06.629938       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.213.6"}
	I1002 07:25:06.673448       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.164.210"}
	
	
	==> kube-controller-manager [178c96249f9c11e548d1469eeefcd5de32442210f1af35a1b1c70bbcbb5caee9] <==
	I1002 07:14:21.150890       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:14:21.150986       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:14:21.151070       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:14:21.153607       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:14:21.160880       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:14:21.166718       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:14:21.166892       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:14:21.166733       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:14:21.167035       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-630775"
	I1002 07:14:21.167125       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 07:14:21.167131       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:14:21.167498       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 07:14:21.170850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 07:14:21.173344       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:14:21.177670       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:14:21.191011       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:14:21.192087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1002 07:25:06.475280       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:25:06.484161       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:25:06.497580       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:25:06.511140       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:25:06.513967       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:25:06.524529       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:25:06.527775       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1002 07:25:06.530981       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [8ba344dad123377a72939202d6efa440d87b7663e4bd64b2dadc679537027ddf] <==
	I1002 07:14:02.006777       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I1002 07:14:02.006886       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.007514       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1002 07:14:02.007744       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:14:02.007890       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.008312       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1002 07:14:02.008349       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="service-lb-controller"
	I1002 07:14:02.008672       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1002 07:14:02.008694       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:14:02.008756       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.015394       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1002 07:14:02.015532       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I1002 07:14:02.015851       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1002 07:14:02.015925       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	I1002 07:14:02.042270       1 controllermanager.go:781] "Started controller" controller="daemonset-controller"
	I1002 07:14:02.042638       1 daemon_controller.go:310] "Starting daemon sets controller" logger="daemonset-controller"
	I1002 07:14:02.042661       1 shared_informer.go:349] "Waiting for caches to sync" controller="daemon sets"
	I1002 07:14:02.069196       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1002 07:14:02.069427       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1002 07:14:02.069480       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrapproving"
	I1002 07:14:02.089131       1 controllermanager.go:781] "Started controller" controller="token-cleaner-controller"
	I1002 07:14:02.089615       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1002 07:14:02.089751       1 shared_informer.go:349] "Waiting for caches to sync" controller="token_cleaner"
	I1002 07:14:02.089871       1 shared_informer.go:356] "Caches are synced" controller="token_cleaner"
	F1002 07:14:03.126256       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/persistent-volume-binder": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [89b98ca1639a412e0dbaa8f47354c23f8c3711eaae363a9da73be6b9e81e25f3] <==
	I1002 07:13:53.078292       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:13:55.525782       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:55.699510       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:55.699620       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:13:55.699748       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:55.732249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:13:55.732366       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:55.736441       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:55.737057       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:55.737117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:55.738312       1 config.go:200] "Starting service config controller"
	I1002 07:13:55.738373       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:55.738414       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:55.738445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:55.738489       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:55.738518       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:55.742586       1 config.go:309] "Starting node config controller"
	I1002 07:13:55.742643       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:55.742670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:55.838897       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:55.839113       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:55.839128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955] <==
	I1002 07:13:12.526794       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:13:12.615718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:12.716596       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:12.716633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:13:12.716937       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:12.738780       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:13:12.738843       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:12.742821       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:12.743324       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:12.743349       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:12.744994       1 config.go:200] "Starting service config controller"
	I1002 07:13:12.745017       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:12.745035       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:12.745039       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:12.745050       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:12.745054       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:12.748972       1 config.go:309] "Starting node config controller"
	I1002 07:13:12.748999       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:12.749008       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:12.845522       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:12.845564       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:13:12.845753       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [004330dfe2dd26e68f7ba578cc7ac15e5d034dcb6e6707f60a375272ad35f422] <==
	I1002 07:14:01.072468       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:14:01.072514       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:14:01.075068       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:14:01.072527       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.080823       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.176945       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:14:01.182234       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.185613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 07:14:17.541784       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:14:17.542079       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:14:17.542215       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:14:17.542331       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:14:17.542498       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:14:17.542694       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:14:17.542845       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:14:17.542983       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:14:17.543100       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:14:17.543254       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:14:17.543414       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:14:17.543555       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:14:17.543654       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:14:17.543823       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:14:17.594447       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:14:17.603388       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:14:17.603430       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405] <==
	E1002 07:13:04.274361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:13:04.275625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:13:04.275830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:13:04.284646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:13:04.285040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:13:04.285087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:13:04.285136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:13:04.285179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:13:04.285226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:13:04.285263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:13:04.285411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:13:04.285932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:13:04.285985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:13:04.286030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:13:04.286161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:13:04.286209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:13:04.286278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:13:04.287852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1002 07:13:05.566193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:51.964488       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 07:13:51.964595       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:13:51.964607       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:13:51.964625       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:51.965728       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:13:51.965750       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 07:29:11 functional-630775 kubelet[4707]: E1002 07:29:11.580674    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:29:15 functional-630775 kubelet[4707]: E1002 07:29:15.579593    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m5b2d" podUID="8885da45-c513-482f-9409-b6a5cf31d6d8"
	Oct 02 07:29:15 functional-630775 kubelet[4707]: E1002 07:29:15.581565    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hmtnk" podUID="57530e7d-72c9-475e-b0ca-b01a14cfc54f"
	Oct 02 07:29:17 functional-630775 kubelet[4707]: E1002 07:29:17.578644    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:29:19 functional-630775 kubelet[4707]: E1002 07:29:19.578861    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:29:24 functional-630775 kubelet[4707]: E1002 07:29:24.579100    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:29:26 functional-630775 kubelet[4707]: E1002 07:29:26.579809    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m5b2d" podUID="8885da45-c513-482f-9409-b6a5cf31d6d8"
	Oct 02 07:29:27 functional-630775 kubelet[4707]: E1002 07:29:27.579639    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hmtnk" podUID="57530e7d-72c9-475e-b0ca-b01a14cfc54f"
	Oct 02 07:29:29 functional-630775 kubelet[4707]: E1002 07:29:29.578845    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:29:30 functional-630775 kubelet[4707]: E1002 07:29:30.578414    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:29:36 functional-630775 kubelet[4707]: E1002 07:29:36.579063    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:29:39 functional-630775 kubelet[4707]: E1002 07:29:39.579483    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hmtnk" podUID="57530e7d-72c9-475e-b0ca-b01a14cfc54f"
	Oct 02 07:29:40 functional-630775 kubelet[4707]: E1002 07:29:40.579017    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m5b2d" podUID="8885da45-c513-482f-9409-b6a5cf31d6d8"
	Oct 02 07:29:43 functional-630775 kubelet[4707]: E1002 07:29:43.579986    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:29:44 functional-630775 kubelet[4707]: E1002 07:29:44.123671    4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 07:29:44 functional-630775 kubelet[4707]: E1002 07:29:44.123729    4707 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 07:29:44 functional-630775 kubelet[4707]: E1002 07:29:44.123805    4707 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-sd479_default(7122762e-93c5-4612-aecc-f4ad583b342c): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:29:44 functional-630775 kubelet[4707]: E1002 07:29:44.123844    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:29:51 functional-630775 kubelet[4707]: E1002 07:29:51.579447    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:29:54 functional-630775 kubelet[4707]: E1002 07:29:54.579012    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m5b2d" podUID="8885da45-c513-482f-9409-b6a5cf31d6d8"
	Oct 02 07:29:54 functional-630775 kubelet[4707]: E1002 07:29:54.580830    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-hmtnk" podUID="57530e7d-72c9-475e-b0ca-b01a14cfc54f"
	Oct 02 07:29:56 functional-630775 kubelet[4707]: E1002 07:29:56.578605    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:29:58 functional-630775 kubelet[4707]: E1002 07:29:58.578467    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:30:03 functional-630775 kubelet[4707]: E1002 07:30:03.578503    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:30:05 functional-630775 kubelet[4707]: E1002 07:30:05.581525    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-m5b2d" podUID="8885da45-c513-482f-9409-b6a5cf31d6d8"
	
	
	==> storage-provisioner [49a7d1907be92f523c680b46b2703bf050574d70e75ee55cd4658f2b84a344da] <==
	W1002 07:29:41.320574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:43.324095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:43.328301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:45.333644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:45.343824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:47.347217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:47.351547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:49.354852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:49.361840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:51.365262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:51.370045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:53.372950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:53.377542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:55.380544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:55.385418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:57.388706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:57.395327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:59.398532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:29:59.403576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:01.407759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:01.413296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:03.417146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:03.424388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:05.428006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:30:05.435536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686] <==
	W1002 07:13:25.679197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:27.682714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:27.690039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:29.693880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:29.702886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:31.707705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:31.715973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:33.719455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:33.725416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:35.728350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:35.735553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:37.738556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:37.744692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:39.747697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:39.754894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:41.760064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:41.766492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:43.770083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:43.774532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:45.779482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:45.786182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:47.789951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:47.794536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:49.797953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:49.805383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-630775 -n functional-630775
helpers_test.go:269: (dbg) Run:  kubectl --context functional-630775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-sd479 hello-node-connect-7d85dfc575-xzj2s sp-pod dashboard-metrics-scraper-77bf4d6c4c-hmtnk kubernetes-dashboard-855c9754f9-m5b2d
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-630775 describe pod busybox-mount hello-node-75c85bcc94-sd479 hello-node-connect-7d85dfc575-xzj2s sp-pod dashboard-metrics-scraper-77bf4d6c4c-hmtnk kubernetes-dashboard-855c9754f9-m5b2d
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-630775 describe pod busybox-mount hello-node-75c85bcc94-sd479 hello-node-connect-7d85dfc575-xzj2s sp-pod dashboard-metrics-scraper-77bf4d6c4c-hmtnk kubernetes-dashboard-855c9754f9-m5b2d: exit status 1 (132.830129ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:48 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://f717db27060500b345df2438da1670b15506f88784670bcd5f0a53a4bae5e82c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 07:14:51 +0000
	      Finished:     Thu, 02 Oct 2025 07:14:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m6f8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-m6f8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15m   default-scheduler  Successfully assigned default/busybox-mount to functional-630775
	  Normal  Pulling    15m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.025s (2.025s including waiting). Image size: 1935750 bytes.
	  Normal  Created    15m   kubelet            Created container: mount-munger
	  Normal  Started    15m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-sd479
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:18:57 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbkb6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tbkb6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sd479 to functional-630775
	  Warning  Failed     9m35s (x3 over 11m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    8m14s (x5 over 11m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8m13s (x5 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     8m13s (x2 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    62s (x43 over 11m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     62s (x43 over 11m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-xzj2s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:58 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqvgd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bqvgd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  15m                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xzj2s to functional-630775
	  Warning  Failed     13m (x4 over 15m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12m (x5 over 15m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)  kubelet            Error: ErrImagePull
	  Warning  Failed     12m                kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x64 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x64 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:54 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcjbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bcjbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/sp-pod to functional-630775
	  Warning  Failed     13m                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12m (x4 over 15m)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x66 over 15m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     11s (x66 over 15m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-hmtnk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-m5b2d" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-630775 describe pod busybox-mount hello-node-75c85bcc94-sd479 hello-node-connect-7d85dfc575-xzj2s sp-pod dashboard-metrics-scraper-77bf4d6c4c-hmtnk kubernetes-dashboard-855c9754f9-m5b2d: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-630775 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-630775 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-xzj2s" [ec49cc62-edb5-44f4-8182-2f3ecfd5a092] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 07:16:08.756300  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:18:24.881439  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:18:52.598382  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-630775 -n functional-630775
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-02 07:24:58.982928893 +0000 UTC m=+2929.870276214
functional_test.go:1645: (dbg) Run:  kubectl --context functional-630775 describe po hello-node-connect-7d85dfc575-xzj2s -n default
functional_test.go:1645: (dbg) kubectl --context functional-630775 describe po hello-node-connect-7d85dfc575-xzj2s -n default:
Name:             hello-node-connect-7d85dfc575-xzj2s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-630775/192.168.49.2
Start Time:       Thu, 02 Oct 2025 07:14:58 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqvgd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bqvgd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xzj2s to functional-630775
Warning  Failed     8m27s (x4 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     7m2s                  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-630775 logs hello-node-connect-7d85dfc575-xzj2s -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-630775 logs hello-node-connect-7d85dfc575-xzj2s -n default: exit status 1 (106.80373ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xzj2s" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-630775 logs hello-node-connect-7d85dfc575-xzj2s -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-630775 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-xzj2s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-630775/192.168.49.2
Start Time:       Thu, 02 Oct 2025 07:14:58 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqvgd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-bqvgd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xzj2s to functional-630775
Warning  Failed     8m27s (x4 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     7m2s                  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m48s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-630775 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-630775 logs -l app=hello-node-connect: exit status 1 (87.503812ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xzj2s" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-630775 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-630775 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.89.202
IPs:                      10.109.89.202
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32187/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-630775
helpers_test.go:243: (dbg) docker inspect functional-630775:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65",
	        "Created": "2025-10-02T07:12:42.807200683Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 857886,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:12:42.868034081Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/hostname",
	        "HostsPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/hosts",
	        "LogPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65-json.log",
	        "Name": "/functional-630775",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-630775:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-630775",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65",
	                "LowerDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95-init/diff:/var/lib/docker/overlay2/f1b2a52495d4d5d1e70fc487fac677b5080c5f1320773666a738aa42def3e2df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-630775",
	                "Source": "/var/lib/docker/volumes/functional-630775/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-630775",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-630775",
	                "name.minikube.sigs.k8s.io": "functional-630775",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b61e4392fa5332ed827648e648c730efdd836e49a062819e890c14e7af22069",
	            "SandboxKey": "/var/run/docker/netns/7b61e4392fa5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-630775": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:e5:ea:59:a6:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "53c71bd34c60a004896ad1741793966c0aa2c75408be79d9661dcac532bd3113",
	                    "EndpointID": "113beb41032ff7995405d2b7630ce3ad773082757f0e8de5d718a56a03503484",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-630775",
	                        "59dc05e609c7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-630775 -n functional-630775
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 logs -n 25: (1.789403015s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdany-port1050225749/001:/mount-9p --alsologtostderr -v=1                   │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh -n functional-630775 sudo cat /home/docker/cp-test.txt                                                      │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ cp      │ functional-630775 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh -n functional-630775 sudo cat /tmp/does/not/exist/cp-test.txt                                               │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh -- ls -la /mount-9p                                                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh cat /mount-9p/test-1759389286511544596                                                                      │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh stat /mount-9p/created-by-test                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh stat /mount-9p/created-by-pod                                                                               │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh sudo umount -f /mount-9p                                                                                    │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdspecific-port2182164821/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh -- ls -la /mount-9p                                                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh sudo umount -f /mount-9p                                                                                    │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount1 --alsologtostderr -v=1                 │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount3 --alsologtostderr -v=1                 │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount2 --alsologtostderr -v=1                 │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh findmnt -T /mount1                                                                                          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh findmnt -T /mount2                                                                                          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh findmnt -T /mount3                                                                                          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ mount   │ -p functional-630775 --kill=true                                                                                                  │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ addons  │ functional-630775 addons list                                                                                                     │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ addons  │ functional-630775 addons list -o json                                                                                             │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:13:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:13:41.671373  862172 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:13:41.671557  862172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:13:41.671561  862172 out.go:374] Setting ErrFile to fd 2...
	I1002 07:13:41.671565  862172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:13:41.671879  862172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:13:41.672253  862172 out.go:368] Setting JSON to false
	I1002 07:13:41.673222  862172 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24971,"bootTime":1759364251,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 07:13:41.673281  862172 start.go:140] virtualization:  
	I1002 07:13:41.677205  862172 out.go:179] * [functional-630775] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:13:41.681464  862172 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:13:41.681576  862172 notify.go:220] Checking for updates...
	I1002 07:13:41.690686  862172 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:13:41.693403  862172 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:13:41.696233  862172 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 07:13:41.699138  862172 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:13:41.701918  862172 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:13:41.705297  862172 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:13:41.705392  862172 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:13:41.735946  862172 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:13:41.736063  862172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:13:41.805958  862172 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 07:13:41.796612313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:13:41.806056  862172 docker.go:318] overlay module found
	I1002 07:13:41.809169  862172 out.go:179] * Using the docker driver based on existing profile
	I1002 07:13:41.811963  862172 start.go:304] selected driver: docker
	I1002 07:13:41.811972  862172 start.go:924] validating driver "docker" against &{Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:13:41.812079  862172 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:13:41.812182  862172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:13:41.874409  862172 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 07:13:41.865166486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:13:41.874846  862172 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:13:41.874871  862172 cni.go:84] Creating CNI manager for ""
	I1002 07:13:41.874926  862172 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:13:41.874963  862172 start.go:348] cluster config:
	{Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:13:41.879737  862172 out.go:179] * Starting "functional-630775" primary control-plane node in "functional-630775" cluster
	I1002 07:13:41.882670  862172 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 07:13:41.885580  862172 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:13:41.888386  862172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 07:13:41.888413  862172 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:13:41.888435  862172 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 07:13:41.888452  862172 cache.go:58] Caching tarball of preloaded images
	I1002 07:13:41.888541  862172 preload.go:233] Found /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 07:13:41.888550  862172 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 07:13:41.888658  862172 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/config.json ...
	I1002 07:13:41.908679  862172 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:13:41.908690  862172 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:13:41.908718  862172 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:13:41.908739  862172 start.go:360] acquireMachinesLock for functional-630775: {Name:mk33e4813bde53d334369ebfc46df1c5523ece98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:13:41.908841  862172 start.go:364] duration metric: took 85.29µs to acquireMachinesLock for "functional-630775"
	I1002 07:13:41.908862  862172 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:13:41.908873  862172 fix.go:54] fixHost starting: 
	I1002 07:13:41.909145  862172 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
	I1002 07:13:41.926545  862172 fix.go:112] recreateIfNeeded on functional-630775: state=Running err=<nil>
	W1002 07:13:41.926565  862172 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:13:41.929753  862172 out.go:252] * Updating the running docker "functional-630775" container ...
	I1002 07:13:41.929779  862172 machine.go:93] provisionDockerMachine start ...
	I1002 07:13:41.929911  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:41.948094  862172 main.go:141] libmachine: Using SSH client type: native
	I1002 07:13:41.948487  862172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1002 07:13:41.948495  862172 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:13:42.103204  862172 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-630775
	
	I1002 07:13:42.103221  862172 ubuntu.go:182] provisioning hostname "functional-630775"
	I1002 07:13:42.103298  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:42.138908  862172 main.go:141] libmachine: Using SSH client type: native
	I1002 07:13:42.139237  862172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1002 07:13:42.139248  862172 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-630775 && echo "functional-630775" | sudo tee /etc/hostname
	I1002 07:13:42.330129  862172 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-630775
	
	I1002 07:13:42.330233  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:42.351394  862172 main.go:141] libmachine: Using SSH client type: native
	I1002 07:13:42.351724  862172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1002 07:13:42.351740  862172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-630775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-630775/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-630775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:13:42.485011  862172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:13:42.485028  862172 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-811293/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-811293/.minikube}
	I1002 07:13:42.485045  862172 ubuntu.go:190] setting up certificates
	I1002 07:13:42.485053  862172 provision.go:84] configureAuth start
	I1002 07:13:42.485113  862172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-630775
	I1002 07:13:42.502768  862172 provision.go:143] copyHostCerts
	I1002 07:13:42.502830  862172 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem, removing ...
	I1002 07:13:42.502846  862172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem
	I1002 07:13:42.502916  862172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem (1078 bytes)
	I1002 07:13:42.503010  862172 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem, removing ...
	I1002 07:13:42.503013  862172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem
	I1002 07:13:42.503033  862172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem (1123 bytes)
	I1002 07:13:42.503080  862172 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem, removing ...
	I1002 07:13:42.503083  862172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem
	I1002 07:13:42.503100  862172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem (1679 bytes)
	I1002 07:13:42.503200  862172 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem org=jenkins.functional-630775 san=[127.0.0.1 192.168.49.2 functional-630775 localhost minikube]
	I1002 07:13:43.203949  862172 provision.go:177] copyRemoteCerts
	I1002 07:13:43.204001  862172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:13:43.204084  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.225902  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.321390  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:13:43.340321  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 07:13:43.358636  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:13:43.375984  862172 provision.go:87] duration metric: took 890.908503ms to configureAuth
	I1002 07:13:43.376001  862172 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:13:43.376197  862172 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:13:43.376203  862172 machine.go:96] duration metric: took 1.446419567s to provisionDockerMachine
	I1002 07:13:43.376209  862172 start.go:293] postStartSetup for "functional-630775" (driver="docker")
	I1002 07:13:43.376218  862172 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:13:43.376277  862172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:13:43.376349  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.393667  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.488825  862172 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:13:43.492256  862172 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:13:43.492277  862172 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:13:43.492288  862172 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/addons for local assets ...
	I1002 07:13:43.492342  862172 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/files for local assets ...
	I1002 07:13:43.492421  862172 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/ssl/certs/8131552.pem -> 8131552.pem in /etc/ssl/certs
	I1002 07:13:43.492492  862172 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/test/nested/copy/813155/hosts -> hosts in /etc/test/nested/copy/813155
	I1002 07:13:43.492535  862172 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/813155
	I1002 07:13:43.500166  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/ssl/certs/8131552.pem --> /etc/ssl/certs/8131552.pem (1708 bytes)
	I1002 07:13:43.518492  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/test/nested/copy/813155/hosts --> /etc/test/nested/copy/813155/hosts (40 bytes)
	I1002 07:13:43.537488  862172 start.go:296] duration metric: took 161.254139ms for postStartSetup
	I1002 07:13:43.537560  862172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:13:43.537614  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.554575  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.645867  862172 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:13:43.650517  862172 fix.go:56] duration metric: took 1.741645188s for fixHost
	I1002 07:13:43.650531  862172 start.go:83] releasing machines lock for "functional-630775", held for 1.741681856s
	I1002 07:13:43.650595  862172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-630775
	I1002 07:13:43.666484  862172 ssh_runner.go:195] Run: cat /version.json
	I1002 07:13:43.666525  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.666542  862172 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:13:43.666591  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.690289  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.693611  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.875700  862172 ssh_runner.go:195] Run: systemctl --version
	I1002 07:13:43.882168  862172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:13:43.886396  862172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:13:43.886458  862172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:13:43.894092  862172 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:13:43.894106  862172 start.go:495] detecting cgroup driver to use...
	I1002 07:13:43.894136  862172 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:13:43.894182  862172 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 07:13:43.915105  862172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 07:13:43.930110  862172 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:13:43.930161  862172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:13:43.946127  862172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:13:43.964154  862172 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:13:44.103128  862172 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:13:44.241633  862172 docker.go:234] disabling docker service ...
	I1002 07:13:44.241702  862172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:13:44.256712  862172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:13:44.270114  862172 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:13:44.411561  862172 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:13:44.552667  862172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:13:44.565749  862172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:13:44.580175  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 07:13:44.589457  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 07:13:44.598152  862172 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 07:13:44.598208  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 07:13:44.606602  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 07:13:44.615008  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 07:13:44.623308  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 07:13:44.631596  862172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:13:44.640347  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 07:13:44.649477  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 07:13:44.658402  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 07:13:44.667181  862172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:13:44.678217  862172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:13:44.686401  862172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:13:44.816261  862172 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 07:13:45.293009  862172 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 07:13:45.293106  862172 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 07:13:45.300476  862172 start.go:563] Will wait 60s for crictl version
	I1002 07:13:45.300561  862172 ssh_runner.go:195] Run: which crictl
	I1002 07:13:45.312446  862172 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:13:45.370835  862172 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 07:13:45.370957  862172 ssh_runner.go:195] Run: containerd --version
	I1002 07:13:45.410700  862172 ssh_runner.go:195] Run: containerd --version
	I1002 07:13:45.446315  862172 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 07:13:45.449317  862172 cli_runner.go:164] Run: docker network inspect functional-630775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:13:45.465254  862172 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:13:45.472181  862172 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 07:13:45.475046  862172 kubeadm.go:883] updating cluster {Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:13:45.475169  862172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 07:13:45.475251  862172 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:13:45.502481  862172 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 07:13:45.502493  862172 containerd.go:534] Images already preloaded, skipping extraction
	I1002 07:13:45.502567  862172 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:13:45.527833  862172 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 07:13:45.527845  862172 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:13:45.527851  862172 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 containerd true true} ...
	I1002 07:13:45.527968  862172 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-630775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:13:45.528041  862172 ssh_runner.go:195] Run: sudo crictl info
	I1002 07:13:45.560599  862172 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 07:13:45.560619  862172 cni.go:84] Creating CNI manager for ""
	I1002 07:13:45.560628  862172 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:13:45.560634  862172 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:13:45.560668  862172 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-630775 NodeName:functional-630775 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:13:45.560816  862172 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-630775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:13:45.560880  862172 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:13:45.574010  862172 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:13:45.574074  862172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:13:45.582806  862172 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1002 07:13:45.596548  862172 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:13:45.610207  862172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2080 bytes)
	I1002 07:13:45.624154  862172 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:13:45.628249  862172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:13:45.776055  862172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:13:45.794033  862172 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775 for IP: 192.168.49.2
	I1002 07:13:45.794043  862172 certs.go:195] generating shared ca certs ...
	I1002 07:13:45.794057  862172 certs.go:227] acquiring lock for ca certs: {Name:mk33b75296d4c02eee9bab3e9582ce8896a2d7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:13:45.794191  862172 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key
	I1002 07:13:45.794237  862172 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key
	I1002 07:13:45.794243  862172 certs.go:257] generating profile certs ...
	I1002 07:13:45.794318  862172 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.key
	I1002 07:13:45.794359  862172 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/apiserver.key.e26a4211
	I1002 07:13:45.794393  862172 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/proxy-client.key
	I1002 07:13:45.794499  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/813155.pem (1338 bytes)
	W1002 07:13:45.794525  862172 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-811293/.minikube/certs/813155_empty.pem, impossibly tiny 0 bytes
	I1002 07:13:45.794532  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:13:45.794558  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:13:45.794576  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:13:45.794596  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem (1679 bytes)
	I1002 07:13:45.794634  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/ssl/certs/8131552.pem (1708 bytes)
	I1002 07:13:45.795320  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:13:45.818726  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 07:13:45.837939  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:13:45.855497  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 07:13:45.873965  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 07:13:45.893106  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:13:45.912548  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:13:45.930281  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:13:45.947458  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/ssl/certs/8131552.pem --> /usr/share/ca-certificates/8131552.pem (1708 bytes)
	I1002 07:13:45.970451  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:13:45.988937  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/813155.pem --> /usr/share/ca-certificates/813155.pem (1338 bytes)
	I1002 07:13:46.009160  862172 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:13:46.023113  862172 ssh_runner.go:195] Run: openssl version
	I1002 07:13:46.029819  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8131552.pem && ln -fs /usr/share/ca-certificates/8131552.pem /etc/ssl/certs/8131552.pem"
	I1002 07:13:46.038518  862172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8131552.pem
	I1002 07:13:46.042669  862172 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 07:12 /usr/share/ca-certificates/8131552.pem
	I1002 07:13:46.042737  862172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8131552.pem
	I1002 07:13:46.084384  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8131552.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:13:46.092564  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:13:46.101206  862172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:13:46.104937  862172 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:13:46.104995  862172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:13:46.146161  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:13:46.154023  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/813155.pem && ln -fs /usr/share/ca-certificates/813155.pem /etc/ssl/certs/813155.pem"
	I1002 07:13:46.162421  862172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/813155.pem
	I1002 07:13:46.166214  862172 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 07:12 /usr/share/ca-certificates/813155.pem
	I1002 07:13:46.166282  862172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/813155.pem
	I1002 07:13:46.207486  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/813155.pem /etc/ssl/certs/51391683.0"
	I1002 07:13:46.215451  862172 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:13:46.219209  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:13:46.260101  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:13:46.300983  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:13:46.342139  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:13:46.382585  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:13:46.423145  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:13:46.463859  862172 kubeadm.go:400] StartCluster: {Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:13:46.463944  862172 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 07:13:46.464017  862172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:13:46.492948  862172 cri.go:89] found id: "3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f"
	I1002 07:13:46.492959  862172 cri.go:89] found id: "f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686"
	I1002 07:13:46.492963  862172 cri.go:89] found id: "c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c"
	I1002 07:13:46.492966  862172 cri.go:89] found id: "8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955"
	I1002 07:13:46.492969  862172 cri.go:89] found id: "03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405"
	I1002 07:13:46.492971  862172 cri.go:89] found id: "cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41"
	I1002 07:13:46.492974  862172 cri.go:89] found id: "1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd"
	I1002 07:13:46.492976  862172 cri.go:89] found id: "ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd"
	I1002 07:13:46.492979  862172 cri.go:89] found id: ""
	I1002 07:13:46.493030  862172 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1002 07:13:46.520677  862172 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405","pid":1455,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405/rootfs","created":"2025-10-02T07:13:00.102187952Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1317376370f0584960a49fd6cb04f45f"},"owner":"root"},{"ociVersion":"1.2.1","id":"059f411532ccb919c
5415f369baaabda7d12733f8305f2100bba69fd1470856b","pid":2107,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b/rootfs","created":"2025-10-02T07:13:23.546497546Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-66bc5c9577-prnlg_e558cf31-ab09-4f11-b02b-7193532b2d6a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-prnlg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e558cf31-ab09-4f11-b
02b-7193532b2d6a"},"owner":"root"},{"ociVersion":"1.2.1","id":"0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60","pid":1753,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60/rootfs","created":"2025-10-02T07:13:12.284944156Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-q2985_ec1b36de-bb7c-407f-914c-e1ee91f5371a","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-q2985","io.kubernetes.cri.sandbox-namespace":"kube-
system","io.kubernetes.cri.sandbox-uid":"ec1b36de-bb7c-407f-914c-e1ee91f5371a"},"owner":"root"},{"ociVersion":"1.2.1","id":"1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd","pid":1374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd/rootfs","created":"2025-10-02T07:12:59.925612135Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9","io.kubernetes.cri.sandbox-name":"etcd-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e3ba43411ab897fd13f8222729a6dcf3"},"owner":"root"},{"ociVersion":"1.2.1","id":"29714899aa
30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a","pid":1286,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a/rootfs","created":"2025-10-02T07:12:59.769636723Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-630775_1317376370f0584960a49fd6cb04f45f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"131737637
0f0584960a49fd6cb04f45f"},"owner":"root"},{"ociVersion":"1.2.1","id":"300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce","pid":2051,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce/rootfs","created":"2025-10-02T07:13:23.476638606Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_098a0d88-d8f7-44bc-9b2b-448769c02475","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":
"kube-system","io.kubernetes.cri.sandbox-uid":"098a0d88-d8f7-44bc-9b2b-448769c02475"},"owner":"root"},{"ociVersion":"1.2.1","id":"3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f","pid":2179,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f/rootfs","created":"2025-10-02T07:13:23.679402548Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri.sandbox-id":"059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-prnlg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e558cf31-ab09-4f11-b02b-7193532b2d6a"},"owner":"root"},{"ociVersion
":"1.2.1","id":"7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad","pid":1265,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad/rootfs","created":"2025-10-02T07:12:59.75205182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-630775_a8e04e9b0f0b64e0f0e12bbc6b34672f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system"
,"io.kubernetes.cri.sandbox-uid":"a8e04e9b0f0b64e0f0e12bbc6b34672f"},"owner":"root"},{"ociVersion":"1.2.1","id":"8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955","pid":1788,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955/rootfs","created":"2025-10-02T07:13:12.45479525Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri.sandbox-id":"ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855","io.kubernetes.cri.sandbox-name":"kube-proxy-9nzx4","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"061b14c9-b276-4aa3-96f2-b5a112fade93"},"owner":"root"},{"ociVersion":"1.2.1","id":"ad703d64a5ee
86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855","pid":1714,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855/rootfs","created":"2025-10-02T07:13:12.240628159Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-9nzx4_061b14c9-b276-4aa3-96f2-b5a112fade93","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-9nzx4","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"061b14c9-b276-4aa3-96f2-b5a112fade93"},"o
wner":"root"},{"ociVersion":"1.2.1","id":"b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31","pid":1275,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31/rootfs","created":"2025-10-02T07:12:59.751239332Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-630775_88272a6a98f81ff09ea3b44b1394376a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-630775","io.kubernetes.cri.sandbox-namespace":"kub
e-system","io.kubernetes.cri.sandbox-uid":"88272a6a98f81ff09ea3b44b1394376a"},"owner":"root"},{"ociVersion":"1.2.1","id":"b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9","pid":1248,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9/rootfs","created":"2025-10-02T07:12:59.730148156Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-630775_e3ba43411ab897fd13f8222729a6dcf3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-f
unctional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e3ba43411ab897fd13f8222729a6dcf3"},"owner":"root"},{"ociVersion":"1.2.1","id":"c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c","pid":1811,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c/rootfs","created":"2025-10-02T07:13:12.510926181Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60","io.kubernetes.cri.sandbox-name":"kindnet-q2985","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ec1b36de-bb7c-40
7f-914c-e1ee91f5371a"},"owner":"root"},{"ociVersion":"1.2.1","id":"ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd","pid":1352,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd/rootfs","created":"2025-10-02T07:12:59.898256882Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"88272a6a98f81ff09ea3b44b1394376a"},"owner":"root"},{"ociVersion":"1.2.1","id":"cf05f437077f64106799182946cfcdfa1e3e8
24a91a1380626bc5f83e8fdca41","pid":1433,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41/rootfs","created":"2025-10-02T07:13:00.013519867Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a8e04e9b0f0b64e0f0e12bbc6b34672f"},"owner":"root"},{"ociVersion":"1.2.1","id":"f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686","pid":2142,"status":"running","bundle":"/run/con
tainerd/io.containerd.runtime.v2.task/k8s.io/f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686/rootfs","created":"2025-10-02T07:13:23.594203341Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"098a0d88-d8f7-44bc-9b2b-448769c02475"},"owner":"root"}]
	I1002 07:13:46.520993  862172 cri.go:126] list returned 16 containers
	I1002 07:13:46.521001  862172 cri.go:129] container: {ID:03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 Status:running}
	I1002 07:13:46.521019  862172 cri.go:135] skipping {03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 running}: state = "running", want "paused"
	I1002 07:13:46.521030  862172 cri.go:129] container: {ID:059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b Status:running}
	I1002 07:13:46.521036  862172 cri.go:131] skipping 059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b - not in ps
	I1002 07:13:46.521040  862172 cri.go:129] container: {ID:0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60 Status:running}
	I1002 07:13:46.521044  862172 cri.go:131] skipping 0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60 - not in ps
	I1002 07:13:46.521047  862172 cri.go:129] container: {ID:1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd Status:running}
	I1002 07:13:46.521052  862172 cri.go:135] skipping {1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd running}: state = "running", want "paused"
	I1002 07:13:46.521057  862172 cri.go:129] container: {ID:29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a Status:running}
	I1002 07:13:46.521062  862172 cri.go:131] skipping 29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a - not in ps
	I1002 07:13:46.521065  862172 cri.go:129] container: {ID:300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce Status:running}
	I1002 07:13:46.521069  862172 cri.go:131] skipping 300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce - not in ps
	I1002 07:13:46.521072  862172 cri.go:129] container: {ID:3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f Status:running}
	I1002 07:13:46.521077  862172 cri.go:135] skipping {3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f running}: state = "running", want "paused"
	I1002 07:13:46.521081  862172 cri.go:129] container: {ID:7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad Status:running}
	I1002 07:13:46.521084  862172 cri.go:131] skipping 7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad - not in ps
	I1002 07:13:46.521086  862172 cri.go:129] container: {ID:8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 Status:running}
	I1002 07:13:46.521092  862172 cri.go:135] skipping {8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 running}: state = "running", want "paused"
	I1002 07:13:46.521096  862172 cri.go:129] container: {ID:ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855 Status:running}
	I1002 07:13:46.521101  862172 cri.go:131] skipping ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855 - not in ps
	I1002 07:13:46.521103  862172 cri.go:129] container: {ID:b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31 Status:running}
	I1002 07:13:46.521108  862172 cri.go:131] skipping b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31 - not in ps
	I1002 07:13:46.521111  862172 cri.go:129] container: {ID:b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9 Status:running}
	I1002 07:13:46.521115  862172 cri.go:131] skipping b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9 - not in ps
	I1002 07:13:46.521118  862172 cri.go:129] container: {ID:c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c Status:running}
	I1002 07:13:46.521123  862172 cri.go:135] skipping {c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c running}: state = "running", want "paused"
	I1002 07:13:46.521126  862172 cri.go:129] container: {ID:ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd Status:running}
	I1002 07:13:46.521131  862172 cri.go:135] skipping {ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd running}: state = "running", want "paused"
	I1002 07:13:46.521135  862172 cri.go:129] container: {ID:cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 Status:running}
	I1002 07:13:46.521139  862172 cri.go:135] skipping {cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 running}: state = "running", want "paused"
	I1002 07:13:46.521143  862172 cri.go:129] container: {ID:f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 Status:running}
	I1002 07:13:46.521150  862172 cri.go:135] skipping {f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 running}: state = "running", want "paused"
	I1002 07:13:46.521204  862172 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:13:46.529142  862172 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:13:46.529151  862172 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:13:46.529199  862172 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:13:46.536684  862172 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:13:46.537212  862172 kubeconfig.go:125] found "functional-630775" server: "https://192.168.49.2:8441"
	I1002 07:13:46.538573  862172 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:13:46.546318  862172 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 07:12:50.564522661 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 07:13:45.619378224 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1002 07:13:46.546326  862172 kubeadm.go:1160] stopping kube-system containers ...
	I1002 07:13:46.546337  862172 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1002 07:13:46.546389  862172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:13:46.576731  862172 cri.go:89] found id: "3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f"
	I1002 07:13:46.576743  862172 cri.go:89] found id: "f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686"
	I1002 07:13:46.576746  862172 cri.go:89] found id: "c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c"
	I1002 07:13:46.576749  862172 cri.go:89] found id: "8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955"
	I1002 07:13:46.576752  862172 cri.go:89] found id: "03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405"
	I1002 07:13:46.576754  862172 cri.go:89] found id: "cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41"
	I1002 07:13:46.576776  862172 cri.go:89] found id: "1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd"
	I1002 07:13:46.576779  862172 cri.go:89] found id: "ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd"
	I1002 07:13:46.576782  862172 cri.go:89] found id: ""
	I1002 07:13:46.576786  862172 cri.go:252] Stopping containers: [3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c 8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd]
	I1002 07:13:46.576842  862172 ssh_runner.go:195] Run: which crictl
	I1002 07:13:46.580717  862172 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c 8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd
	I1002 07:14:09.260854  862172 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c 8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd: (22.680100624s)
	I1002 07:14:09.260914  862172 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 07:14:09.356787  862172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:14:09.364851  862172 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 07:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 07:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 07:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  2 07:12 /etc/kubernetes/scheduler.conf
	
	I1002 07:14:09.364926  862172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 07:14:09.372718  862172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 07:14:09.380925  862172 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:14:09.380983  862172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:14:09.388424  862172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 07:14:09.396153  862172 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:14:09.396207  862172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:14:09.403866  862172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 07:14:09.411486  862172 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:14:09.411541  862172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:14:09.419031  862172 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:14:09.426969  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:09.477718  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:11.144343  862172 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.666597981s)
	I1002 07:14:11.144409  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:11.378088  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:11.451044  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:11.531208  862172 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:14:11.531273  862172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:14:12.032021  862172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:14:12.531343  862172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:14:12.553208  862172 api_server.go:72] duration metric: took 1.022009892s to wait for apiserver process to appear ...
	I1002 07:14:12.553222  862172 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:14:12.553240  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:17.553922  862172 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 07:14:17.553946  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:17.595327  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 07:14:17.595343  862172 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 07:14:18.054016  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:18.062320  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 07:14:18.062351  862172 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 07:14:18.553915  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:18.566171  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 07:14:18.566189  862172 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 07:14:19.053755  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:19.062039  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 07:14:19.075884  862172 api_server.go:141] control plane version: v1.34.1
	I1002 07:14:19.075901  862172 api_server.go:131] duration metric: took 6.522674439s to wait for apiserver health ...
	I1002 07:14:19.075909  862172 cni.go:84] Creating CNI manager for ""
	I1002 07:14:19.075914  862172 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:14:19.079225  862172 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 07:14:19.082076  862172 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 07:14:19.086181  862172 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 07:14:19.086191  862172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 07:14:19.099261  862172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 07:14:19.503118  862172 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:14:19.506726  862172 system_pods.go:59] 8 kube-system pods found
	I1002 07:14:19.506747  862172 system_pods.go:61] "coredns-66bc5c9577-prnlg" [e558cf31-ab09-4f11-b02b-7193532b2d6a] Running
	I1002 07:14:19.506752  862172 system_pods.go:61] "etcd-functional-630775" [54789ff5-5de9-4d26-a2ed-016f2d213969] Running
	I1002 07:14:19.506755  862172 system_pods.go:61] "kindnet-q2985" [ec1b36de-bb7c-407f-914c-e1ee91f5371a] Running
	I1002 07:14:19.506762  862172 system_pods.go:61] "kube-apiserver-functional-630775" [da2e8b84-4a5a-440e-a363-d5e49f3b063d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:14:19.506769  862172 system_pods.go:61] "kube-controller-manager-functional-630775" [7a0b359b-590f-4d60-b4ff-db5abdb995dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:14:19.506774  862172 system_pods.go:61] "kube-proxy-9nzx4" [061b14c9-b276-4aa3-96f2-b5a112fade93] Running
	I1002 07:14:19.506778  862172 system_pods.go:61] "kube-scheduler-functional-630775" [a9da3f11-fb1d-4914-9e27-bc0ed0031000] Running
	I1002 07:14:19.506781  862172 system_pods.go:61] "storage-provisioner" [098a0d88-d8f7-44bc-9b2b-448769c02475] Running
	I1002 07:14:19.506786  862172 system_pods.go:74] duration metric: took 3.658563ms to wait for pod list to return data ...
	I1002 07:14:19.506792  862172 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:14:19.509368  862172 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:14:19.509385  862172 node_conditions.go:123] node cpu capacity is 2
	I1002 07:14:19.509395  862172 node_conditions.go:105] duration metric: took 2.59932ms to run NodePressure ...
	I1002 07:14:19.509455  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:19.761828  862172 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 07:14:19.765664  862172 kubeadm.go:743] kubelet initialised
	I1002 07:14:19.765675  862172 kubeadm.go:744] duration metric: took 3.833797ms waiting for restarted kubelet to initialise ...
	I1002 07:14:19.765688  862172 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 07:14:19.787866  862172 ops.go:34] apiserver oom_adj: -16
	I1002 07:14:19.787878  862172 kubeadm.go:601] duration metric: took 33.258722304s to restartPrimaryControlPlane
	I1002 07:14:19.787885  862172 kubeadm.go:402] duration metric: took 33.324039011s to StartCluster
	I1002 07:14:19.787899  862172 settings.go:142] acquiring lock: {Name:mkfabb257d5e6dc89516b7f3eecfb5ad470245b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:14:19.787965  862172 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:14:19.788635  862172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/kubeconfig: {Name:mk61b1a16c6c070d43ba1e4fed7f7f8861077db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:14:19.788918  862172 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 07:14:19.789227  862172 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:14:19.789276  862172 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:14:19.789359  862172 addons.go:69] Setting storage-provisioner=true in profile "functional-630775"
	I1002 07:14:19.789366  862172 addons.go:69] Setting default-storageclass=true in profile "functional-630775"
	I1002 07:14:19.789370  862172 addons.go:238] Setting addon storage-provisioner=true in "functional-630775"
	W1002 07:14:19.789376  862172 addons.go:247] addon storage-provisioner should already be in state true
	I1002 07:14:19.789380  862172 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-630775"
	I1002 07:14:19.789394  862172 host.go:66] Checking if "functional-630775" exists ...
	I1002 07:14:19.789709  862172 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
	I1002 07:14:19.789928  862172 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
	I1002 07:14:19.796485  862172 out.go:179] * Verifying Kubernetes components...
	I1002 07:14:19.799724  862172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:14:19.831000  862172 addons.go:238] Setting addon default-storageclass=true in "functional-630775"
	W1002 07:14:19.831011  862172 addons.go:247] addon default-storageclass should already be in state true
	I1002 07:14:19.831032  862172 host.go:66] Checking if "functional-630775" exists ...
	I1002 07:14:19.831436  862172 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
	I1002 07:14:19.834614  862172 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:14:19.837838  862172 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:14:19.837854  862172 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:14:19.837915  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:14:19.854010  862172 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:14:19.854022  862172 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:14:19.854080  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:14:19.884793  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:14:19.912028  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:14:20.095555  862172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:14:20.111918  862172 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:14:20.116246  862172 node_ready.go:35] waiting up to 6m0s for node "functional-630775" to be "Ready" ...
	I1002 07:14:20.122714  862172 node_ready.go:49] node "functional-630775" is "Ready"
	I1002 07:14:20.122732  862172 node_ready.go:38] duration metric: took 6.465863ms for node "functional-630775" to be "Ready" ...
	I1002 07:14:20.122744  862172 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:14:20.122800  862172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:14:20.148198  862172 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:14:20.902287  862172 api_server.go:72] duration metric: took 1.113343492s to wait for apiserver process to appear ...
	I1002 07:14:20.902298  862172 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:14:20.902316  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:20.912126  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 07:14:20.913105  862172 api_server.go:141] control plane version: v1.34.1
	I1002 07:14:20.913118  862172 api_server.go:131] duration metric: took 10.81484ms to wait for apiserver health ...
	I1002 07:14:20.913125  862172 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:14:20.914304  862172 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 07:14:20.916689  862172 system_pods.go:59] 8 kube-system pods found
	I1002 07:14:20.916704  862172 system_pods.go:61] "coredns-66bc5c9577-prnlg" [e558cf31-ab09-4f11-b02b-7193532b2d6a] Running
	I1002 07:14:20.916709  862172 system_pods.go:61] "etcd-functional-630775" [54789ff5-5de9-4d26-a2ed-016f2d213969] Running
	I1002 07:14:20.916713  862172 system_pods.go:61] "kindnet-q2985" [ec1b36de-bb7c-407f-914c-e1ee91f5371a] Running
	I1002 07:14:20.916719  862172 system_pods.go:61] "kube-apiserver-functional-630775" [da2e8b84-4a5a-440e-a363-d5e49f3b063d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:14:20.916725  862172 system_pods.go:61] "kube-controller-manager-functional-630775" [7a0b359b-590f-4d60-b4ff-db5abdb995dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:14:20.916729  862172 system_pods.go:61] "kube-proxy-9nzx4" [061b14c9-b276-4aa3-96f2-b5a112fade93] Running
	I1002 07:14:20.916733  862172 system_pods.go:61] "kube-scheduler-functional-630775" [a9da3f11-fb1d-4914-9e27-bc0ed0031000] Running
	I1002 07:14:20.916736  862172 system_pods.go:61] "storage-provisioner" [098a0d88-d8f7-44bc-9b2b-448769c02475] Running
	I1002 07:14:20.916742  862172 system_pods.go:74] duration metric: took 3.611106ms to wait for pod list to return data ...
	I1002 07:14:20.916749  862172 default_sa.go:34] waiting for default service account to be created ...
	I1002 07:14:20.917317  862172 addons.go:514] duration metric: took 1.128039601s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 07:14:20.921851  862172 default_sa.go:45] found service account: "default"
	I1002 07:14:20.921864  862172 default_sa.go:55] duration metric: took 5.110626ms for default service account to be created ...
	I1002 07:14:20.921871  862172 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 07:14:20.925617  862172 system_pods.go:86] 8 kube-system pods found
	I1002 07:14:20.925632  862172 system_pods.go:89] "coredns-66bc5c9577-prnlg" [e558cf31-ab09-4f11-b02b-7193532b2d6a] Running
	I1002 07:14:20.925638  862172 system_pods.go:89] "etcd-functional-630775" [54789ff5-5de9-4d26-a2ed-016f2d213969] Running
	I1002 07:14:20.925641  862172 system_pods.go:89] "kindnet-q2985" [ec1b36de-bb7c-407f-914c-e1ee91f5371a] Running
	I1002 07:14:20.925648  862172 system_pods.go:89] "kube-apiserver-functional-630775" [da2e8b84-4a5a-440e-a363-d5e49f3b063d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:14:20.925653  862172 system_pods.go:89] "kube-controller-manager-functional-630775" [7a0b359b-590f-4d60-b4ff-db5abdb995dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:14:20.925658  862172 system_pods.go:89] "kube-proxy-9nzx4" [061b14c9-b276-4aa3-96f2-b5a112fade93] Running
	I1002 07:14:20.925663  862172 system_pods.go:89] "kube-scheduler-functional-630775" [a9da3f11-fb1d-4914-9e27-bc0ed0031000] Running
	I1002 07:14:20.925666  862172 system_pods.go:89] "storage-provisioner" [098a0d88-d8f7-44bc-9b2b-448769c02475] Running
	I1002 07:14:20.925672  862172 system_pods.go:126] duration metric: took 3.796308ms to wait for k8s-apps to be running ...
	I1002 07:14:20.925678  862172 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 07:14:20.925733  862172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:14:20.948257  862172 system_svc.go:56] duration metric: took 22.570082ms WaitForService to wait for kubelet
	I1002 07:14:20.948274  862172 kubeadm.go:586] duration metric: took 1.159335084s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:14:20.948290  862172 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:14:20.958158  862172 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:14:20.958173  862172 node_conditions.go:123] node cpu capacity is 2
	I1002 07:14:20.958182  862172 node_conditions.go:105] duration metric: took 9.888033ms to run NodePressure ...
	I1002 07:14:20.958194  862172 start.go:241] waiting for startup goroutines ...
	I1002 07:14:20.958202  862172 start.go:246] waiting for cluster config update ...
	I1002 07:14:20.958211  862172 start.go:255] writing updated cluster config ...
	I1002 07:14:20.958502  862172 ssh_runner.go:195] Run: rm -f paused
	I1002 07:14:20.962320  862172 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:14:20.973046  862172 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prnlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:20.987079  862172 pod_ready.go:94] pod "coredns-66bc5c9577-prnlg" is "Ready"
	I1002 07:14:20.987104  862172 pod_ready.go:86] duration metric: took 14.025987ms for pod "coredns-66bc5c9577-prnlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:20.994730  862172 pod_ready.go:83] waiting for pod "etcd-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:21.004386  862172 pod_ready.go:94] pod "etcd-functional-630775" is "Ready"
	I1002 07:14:21.004403  862172 pod_ready.go:86] duration metric: took 9.659747ms for pod "etcd-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:21.007414  862172 pod_ready.go:83] waiting for pod "kube-apiserver-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 07:14:23.013583  862172 pod_ready.go:104] pod "kube-apiserver-functional-630775" is not "Ready", error: <nil>
	I1002 07:14:24.512876  862172 pod_ready.go:94] pod "kube-apiserver-functional-630775" is "Ready"
	I1002 07:14:24.512890  862172 pod_ready.go:86] duration metric: took 3.50546221s for pod "kube-apiserver-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:24.515170  862172 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 07:14:26.520108  862172 pod_ready.go:104] pod "kube-controller-manager-functional-630775" is not "Ready", error: <nil>
	I1002 07:14:27.520641  862172 pod_ready.go:94] pod "kube-controller-manager-functional-630775" is "Ready"
	I1002 07:14:27.520656  862172 pod_ready.go:86] duration metric: took 3.005471687s for pod "kube-controller-manager-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.522900  862172 pod_ready.go:83] waiting for pod "kube-proxy-9nzx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.527167  862172 pod_ready.go:94] pod "kube-proxy-9nzx4" is "Ready"
	I1002 07:14:27.527180  862172 pod_ready.go:86] duration metric: took 4.267526ms for pod "kube-proxy-9nzx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.529513  862172 pod_ready.go:83] waiting for pod "kube-scheduler-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.766495  862172 pod_ready.go:94] pod "kube-scheduler-functional-630775" is "Ready"
	I1002 07:14:27.766510  862172 pod_ready.go:86] duration metric: took 236.984661ms for pod "kube-scheduler-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.766520  862172 pod_ready.go:40] duration metric: took 6.804180892s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:14:27.820304  862172 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 07:14:27.823366  862172 out.go:179] * Done! kubectl is now configured to use "functional-630775" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f717db2706050       1611cd07b61d5       10 minutes ago      Exited              mount-munger              0                   a0764ff8470de       busybox-mount                               default
	9c9c69562b8f2       35f3cbee4fb77       10 minutes ago      Running             nginx                     0                   e8dc22f25086e       nginx-svc                                   default
	6ad64793002dd       43911e833d64d       10 minutes ago      Running             kube-apiserver            0                   3762e74a92c10       kube-apiserver-functional-630775            kube-system
	178c96249f9c1       7eb2c6ff0c5a7       10 minutes ago      Running             kube-controller-manager   2                   7c212a34d95f5       kube-controller-manager-functional-630775   kube-system
	602bfadd7763e       a1894772a478e       11 minutes ago      Running             etcd                      1                   b2957b80a9860       etcd-functional-630775                      kube-system
	004330dfe2dd2       b5f57ec6b9867       11 minutes ago      Running             kube-scheduler            1                   29714899aa300       kube-scheduler-functional-630775            kube-system
	8ba344dad1233       7eb2c6ff0c5a7       11 minutes ago      Exited              kube-controller-manager   1                   7c212a34d95f5       kube-controller-manager-functional-630775   kube-system
	49a7d1907be92       ba04bb24b9575       11 minutes ago      Running             storage-provisioner       1                   300b99d061daa       storage-provisioner                         kube-system
	89b98ca1639a4       05baa95f5142d       11 minutes ago      Running             kube-proxy                1                   ad703d64a5ee8       kube-proxy-9nzx4                            kube-system
	8dcd88165ca08       b1a8c6f707935       11 minutes ago      Running             kindnet-cni               1                   0b1223812eb7c       kindnet-q2985                               kube-system
	9d5e48870ba26       138784d87c9c5       11 minutes ago      Running             coredns                   1                   059f411532ccb       coredns-66bc5c9577-prnlg                    kube-system
	3fbb4a2c8d8bc       138784d87c9c5       11 minutes ago      Exited              coredns                   0                   059f411532ccb       coredns-66bc5c9577-prnlg                    kube-system
	f53adf4e4cfba       ba04bb24b9575       11 minutes ago      Exited              storage-provisioner       0                   300b99d061daa       storage-provisioner                         kube-system
	c55ee29bbc732       b1a8c6f707935       11 minutes ago      Exited              kindnet-cni               0                   0b1223812eb7c       kindnet-q2985                               kube-system
	8ca7e16833e24       05baa95f5142d       11 minutes ago      Exited              kube-proxy                0                   ad703d64a5ee8       kube-proxy-9nzx4                            kube-system
	03ee97847e6c7       b5f57ec6b9867       12 minutes ago      Exited              kube-scheduler            0                   29714899aa300       kube-scheduler-functional-630775            kube-system
	1bd4dd24c2653       a1894772a478e       12 minutes ago      Exited              etcd                      0                   b2957b80a9860       etcd-functional-630775                      kube-system
	
	
	==> containerd <==
	Oct 02 07:20:31 functional-630775 containerd[3593]: time="2025-10-02T07:20:31.580377336Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 02 07:20:31 functional-630775 containerd[3593]: time="2025-10-02T07:20:31.583094807Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:20:31 functional-630775 containerd[3593]: time="2025-10-02T07:20:31.718034881Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:20:32 functional-630775 containerd[3593]: time="2025-10-02T07:20:32.013881520Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:20:32 functional-630775 containerd[3593]: time="2025-10-02T07:20:32.013941104Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 02 07:20:32 functional-630775 containerd[3593]: time="2025-10-02T07:20:32.015378165Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 07:20:32 functional-630775 containerd[3593]: time="2025-10-02T07:20:32.017802606Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:20:32 functional-630775 containerd[3593]: time="2025-10-02T07:20:32.169330012Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:20:32 functional-630775 containerd[3593]: time="2025-10-02T07:20:32.464562307Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:20:32 functional-630775 containerd[3593]: time="2025-10-02T07:20:32.464666681Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 02 07:20:37 functional-630775 containerd[3593]: time="2025-10-02T07:20:37.581868305Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 07:20:37 functional-630775 containerd[3593]: time="2025-10-02T07:20:37.584228476Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:20:37 functional-630775 containerd[3593]: time="2025-10-02T07:20:37.715451024Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:20:38 functional-630775 containerd[3593]: time="2025-10-02T07:20:38.013141400Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:20:38 functional-630775 containerd[3593]: time="2025-10-02T07:20:38.013179257Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 02 07:21:53 functional-630775 containerd[3593]: time="2025-10-02T07:21:53.579114328Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 07:21:53 functional-630775 containerd[3593]: time="2025-10-02T07:21:53.581862781Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:21:53 functional-630775 containerd[3593]: time="2025-10-02T07:21:53.708705589Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:21:54 functional-630775 containerd[3593]: time="2025-10-02T07:21:54.105574929Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:21:54 functional-630775 containerd[3593]: time="2025-10-02T07:21:54.105693359Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=11740"
	Oct 02 07:24:36 functional-630775 containerd[3593]: time="2025-10-02T07:24:36.579597661Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 07:24:36 functional-630775 containerd[3593]: time="2025-10-02T07:24:36.581922016Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:24:36 functional-630775 containerd[3593]: time="2025-10-02T07:24:36.715027900Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:24:37 functional-630775 containerd[3593]: time="2025-10-02T07:24:37.132057293Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:24:37 functional-630775 containerd[3593]: time="2025-10-02T07:24:37.132147088Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=11740"
	
	
	==> coredns [3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38638 - 28631 "HINFO IN 8890447447211847089.1590523317042042169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012992148s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d5e48870ba26acf37929a5697515a9c28c95aa154630492e8a65ff7db1cbe96] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56466 - 63142 "HINFO IN 2250469666875045467.358806669876498839. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016943821s
	
	
	==> describe nodes <==
	Name:               functional-630775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-630775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-630775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_13_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:13:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-630775
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:25:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:24:50 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:24:50 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:24:50 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:24:50 +0000   Thu, 02 Oct 2025 07:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-630775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cadd65090dff457dbb73450103633ff2
	  System UUID:                6a9d513c-1640-40d4-8a86-98c871c3750d
	  Boot ID:                    7d897d56-c217-4cfc-926c-91f9be002777
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-sd479                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  default                     hello-node-connect-7d85dfc575-xzj2s          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-prnlg                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-functional-630775                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-q2985                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-630775             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-630775    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-9nzx4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-630775             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m                kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                node-controller  Node functional-630775 event: Registered Node functional-630775 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-630775 status is now: NodeReady
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-630775 event: Registered Node functional-630775 in Controller
	
	
	==> dmesg <==
	[Oct 2 05:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 2 06:35] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd] <==
	{"level":"warn","ts":"2025-10-02T07:13:02.525498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.545863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.572157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.598265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.615644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.641510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.739888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36202","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:13:52.114867Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:13:52.114936Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-630775","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T07:13:52.115053Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:59.121770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:59.123526Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.123603Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T07:13:59.123856Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T07:13:59.123880Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124602Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124651Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:59.124661Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124700Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124720Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:59.124727Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.127456Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T07:13:59.127532Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.127611Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T07:13:59.127622Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-630775","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [602bfadd7763eb054613d766eab0c38eff37bc1a71150682c8892da1032e031a] <==
	{"level":"warn","ts":"2025-10-02T07:14:16.516995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.533988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.557989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.567822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.580883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.596396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.612225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.626910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.645890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.661313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.676336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.692121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.707254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.724915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.741439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.757272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.777617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.788071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.820394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.836544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.851858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.929953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:24:15.884828Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1066}
	{"level":"info","ts":"2025-10-02T07:24:15.908169Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1066,"took":"23.017939ms","hash":2107747004,"current-db-size-bytes":3158016,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1327104,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-02T07:24:15.908227Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2107747004,"revision":1066,"compact-revision":-1}
	
	
	==> kernel <==
	 07:25:01 up  7:07,  0 user,  load average: 0.04, 0.32, 0.68
	Linux functional-630775 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8dcd88165ca08da5e62e74301de3c24c91e43ad60914ede11e5bfc04c0dcfff6] <==
	I1002 07:22:53.241635       1 main.go:301] handling current node
	I1002 07:23:03.242347       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:23:03.242384       1 main.go:301] handling current node
	I1002 07:23:13.249684       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:23:13.249720       1 main.go:301] handling current node
	I1002 07:23:23.245824       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:23:23.245859       1 main.go:301] handling current node
	I1002 07:23:33.242326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:23:33.242363       1 main.go:301] handling current node
	I1002 07:23:43.241279       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:23:43.241320       1 main.go:301] handling current node
	I1002 07:23:53.241536       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:23:53.241759       1 main.go:301] handling current node
	I1002 07:24:03.244695       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:03.244730       1 main.go:301] handling current node
	I1002 07:24:13.241334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:13.241372       1 main.go:301] handling current node
	I1002 07:24:23.248264       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:23.248304       1 main.go:301] handling current node
	I1002 07:24:33.244827       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:33.244871       1 main.go:301] handling current node
	I1002 07:24:43.241651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:43.241688       1 main.go:301] handling current node
	I1002 07:24:53.242908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:24:53.242951       1 main.go:301] handling current node
	
	
	==> kindnet [c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c] <==
	I1002 07:13:12.711039       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:13:12.711295       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 07:13:12.711458       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:13:12.711478       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:13:12.711488       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:13:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:13:12.915994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:13:12.916209       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:13:12.916310       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:13:12.919193       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 07:13:13.119841       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:13:13.119865       1 metrics.go:72] Registering metrics
	I1002 07:13:13.208874       1 controller.go:711] "Syncing nftables rules"
	I1002 07:13:22.922829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:22.922892       1 main.go:301] handling current node
	I1002 07:13:32.922862       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:32.922901       1 main.go:301] handling current node
	I1002 07:13:42.916843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:42.916881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ad64793002dd63e29f2e6d0c903589a03b0c6e995ae310ae36b85d4ee81c65b] <==
	I1002 07:14:17.694281       1 cache.go:39] Caches are synced for autoregister controller
	I1002 07:14:17.735482       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 07:14:17.762063       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:14:17.763062       1 policy_source.go:240] refreshing policies
	I1002 07:14:17.763376       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:14:17.763564       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:14:17.763609       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:14:17.763777       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 07:14:17.809707       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:14:17.822548       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:14:18.445605       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:14:18.560451       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 07:14:18.773767       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 07:14:18.775445       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:14:18.783231       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:14:19.495814       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:14:19.631063       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:14:19.697525       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:14:19.704659       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:14:21.128106       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:14:31.174050       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.102.190"}
	I1002 07:14:37.116336       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.131.33"}
	I1002 07:14:58.634088       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.89.202"}
	I1002 07:18:57.713717       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.220.51"}
	I1002 07:24:17.732584       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [178c96249f9c11e548d1469eeefcd5de32442210f1af35a1b1c70bbcbb5caee9] <==
	I1002 07:14:21.130160       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:14:21.132258       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 07:14:21.138828       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:14:21.141423       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 07:14:21.145794       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 07:14:21.149245       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:14:21.150621       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:14:21.150784       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:14:21.150890       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:14:21.150986       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:14:21.151070       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:14:21.153607       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:14:21.160880       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:14:21.166718       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:14:21.166892       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:14:21.166733       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:14:21.167035       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-630775"
	I1002 07:14:21.167125       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 07:14:21.167131       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:14:21.167498       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 07:14:21.170850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 07:14:21.173344       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:14:21.177670       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:14:21.191011       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:14:21.192087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [8ba344dad123377a72939202d6efa440d87b7663e4bd64b2dadc679537027ddf] <==
	I1002 07:14:02.006777       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I1002 07:14:02.006886       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.007514       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1002 07:14:02.007744       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:14:02.007890       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.008312       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1002 07:14:02.008349       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="service-lb-controller"
	I1002 07:14:02.008672       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1002 07:14:02.008694       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:14:02.008756       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.015394       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1002 07:14:02.015532       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I1002 07:14:02.015851       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1002 07:14:02.015925       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	I1002 07:14:02.042270       1 controllermanager.go:781] "Started controller" controller="daemonset-controller"
	I1002 07:14:02.042638       1 daemon_controller.go:310] "Starting daemon sets controller" logger="daemonset-controller"
	I1002 07:14:02.042661       1 shared_informer.go:349] "Waiting for caches to sync" controller="daemon sets"
	I1002 07:14:02.069196       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1002 07:14:02.069427       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1002 07:14:02.069480       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrapproving"
	I1002 07:14:02.089131       1 controllermanager.go:781] "Started controller" controller="token-cleaner-controller"
	I1002 07:14:02.089615       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1002 07:14:02.089751       1 shared_informer.go:349] "Waiting for caches to sync" controller="token_cleaner"
	I1002 07:14:02.089871       1 shared_informer.go:356] "Caches are synced" controller="token_cleaner"
	F1002 07:14:03.126256       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/persistent-volume-binder": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [89b98ca1639a412e0dbaa8f47354c23f8c3711eaae363a9da73be6b9e81e25f3] <==
	I1002 07:13:53.078292       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:13:55.525782       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:55.699510       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:55.699620       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:13:55.699748       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:55.732249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:13:55.732366       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:55.736441       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:55.737057       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:55.737117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:55.738312       1 config.go:200] "Starting service config controller"
	I1002 07:13:55.738373       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:55.738414       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:55.738445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:55.738489       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:55.738518       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:55.742586       1 config.go:309] "Starting node config controller"
	I1002 07:13:55.742643       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:55.742670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:55.838897       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:55.839113       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:55.839128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955] <==
	I1002 07:13:12.526794       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:13:12.615718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:12.716596       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:12.716633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:13:12.716937       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:12.738780       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:13:12.738843       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:12.742821       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:12.743324       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:12.743349       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:12.744994       1 config.go:200] "Starting service config controller"
	I1002 07:13:12.745017       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:12.745035       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:12.745039       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:12.745050       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:12.745054       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:12.748972       1 config.go:309] "Starting node config controller"
	I1002 07:13:12.748999       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:12.749008       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:12.845522       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:12.845564       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:13:12.845753       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [004330dfe2dd26e68f7ba578cc7ac15e5d034dcb6e6707f60a375272ad35f422] <==
	I1002 07:14:01.072468       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:14:01.072514       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:14:01.075068       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:14:01.072527       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.080823       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.176945       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:14:01.182234       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.185613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 07:14:17.541784       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:14:17.542079       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:14:17.542215       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:14:17.542331       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:14:17.542498       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:14:17.542694       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:14:17.542845       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:14:17.542983       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:14:17.543100       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:14:17.543254       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:14:17.543414       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:14:17.543555       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:14:17.543654       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:14:17.543823       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:14:17.594447       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:14:17.603388       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:14:17.603430       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405] <==
	E1002 07:13:04.274361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:13:04.275625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:13:04.275830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:13:04.284646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:13:04.285040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:13:04.285087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:13:04.285136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:13:04.285179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:13:04.285226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:13:04.285263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:13:04.285411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:13:04.285932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:13:04.285985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:13:04.286030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:13:04.286161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:13:04.286209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:13:04.286278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:13:04.287852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1002 07:13:05.566193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:51.964488       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 07:13:51.964595       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:13:51.964607       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:13:51.964625       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:51.965728       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:13:51.965750       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 07:23:33 functional-630775 kubelet[4707]: E1002 07:23:33.578900    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:23:35 functional-630775 kubelet[4707]: E1002 07:23:35.578978    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:23:43 functional-630775 kubelet[4707]: E1002 07:23:43.579087    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:23:46 functional-630775 kubelet[4707]: E1002 07:23:46.579161    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:23:47 functional-630775 kubelet[4707]: E1002 07:23:47.579009    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:23:54 functional-630775 kubelet[4707]: E1002 07:23:54.578951    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:23:58 functional-630775 kubelet[4707]: E1002 07:23:58.579194    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:23:59 functional-630775 kubelet[4707]: E1002 07:23:59.578936    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:24:05 functional-630775 kubelet[4707]: E1002 07:24:05.579455    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:24:10 functional-630775 kubelet[4707]: E1002 07:24:10.579210    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:24:10 functional-630775 kubelet[4707]: E1002 07:24:10.579373    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:24:19 functional-630775 kubelet[4707]: E1002 07:24:19.578605    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:24:21 functional-630775 kubelet[4707]: E1002 07:24:21.579303    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:24:22 functional-630775 kubelet[4707]: E1002 07:24:22.579295    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:24:31 functional-630775 kubelet[4707]: E1002 07:24:31.579835    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:24:34 functional-630775 kubelet[4707]: E1002 07:24:34.578529    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:24:37 functional-630775 kubelet[4707]: E1002 07:24:37.132308    4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 07:24:37 functional-630775 kubelet[4707]: E1002 07:24:37.132376    4707 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 07:24:37 functional-630775 kubelet[4707]: E1002 07:24:37.132470    4707 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-sd479_default(7122762e-93c5-4612-aecc-f4ad583b342c): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:24:37 functional-630775 kubelet[4707]: E1002 07:24:37.132814    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:24:45 functional-630775 kubelet[4707]: E1002 07:24:45.579698    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:24:48 functional-630775 kubelet[4707]: E1002 07:24:48.578563    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:24:48 functional-630775 kubelet[4707]: E1002 07:24:48.578844    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:24:59 functional-630775 kubelet[4707]: E1002 07:24:59.580169    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sd479" podUID="7122762e-93c5-4612-aecc-f4ad583b342c"
	Oct 02 07:25:00 functional-630775 kubelet[4707]: E1002 07:25:00.579648    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	
	
	==> storage-provisioner [49a7d1907be92f523c680b46b2703bf050574d70e75ee55cd4658f2b84a344da] <==
	W1002 07:24:35.877767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:37.881820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:37.888587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:39.892329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:39.896716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:41.899947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:41.906495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:43.909666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:43.914465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:45.918079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:45.922548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:47.927264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:47.934376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:49.938066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:49.943099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:51.951244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:51.956195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:53.959871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:53.964574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:55.968827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:55.976055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:57.980252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:57.989161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:59.992572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:24:59.998042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686] <==
	W1002 07:13:25.679197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:27.682714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:27.690039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:29.693880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:29.702886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:31.707705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:31.715973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:33.719455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:33.725416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:35.728350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:35.735553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:37.738556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:37.744692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:39.747697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:39.754894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:41.760064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:41.766492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:43.770083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:43.774532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:45.779482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:45.786182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:47.789951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:47.794536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:49.797953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:49.805383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
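Every image-pull failure in the logs above is the same root cause: Docker Hub's unauthenticated pull rate limit (HTTP 429, `toomanyrequests`). Docker Hub reports the quota via `ratelimit-limit` and `ratelimit-remaining` response headers on manifest requests, in the form `<count>;w=<window-seconds>`. A minimal sketch of parsing those headers follows; the sample values are illustrative, not captured from this run:

```python
# Parse Docker Hub rate-limit headers of the form "<count>;w=<window-seconds>",
# as returned on registry manifest requests.

def parse_ratelimit(value: str) -> dict:
    """Split a Docker Hub ratelimit header into pull count and window length."""
    count, _, rest = value.partition(";")
    window = None
    for part in rest.split(";"):
        key, _, val = part.partition("=")
        if key == "w":
            window = int(val)
    return {"count": int(count), "window_seconds": window}

# Illustrative sample headers (NOT from this test run): an anonymous client
# that has exhausted its quota, matching the 429 responses seen above.
sample = {
    "ratelimit-limit": "100;w=21600",    # 100 pulls per 6 hours (anonymous)
    "ratelimit-remaining": "0;w=21600",  # quota exhausted -> 429 on next pull
}

limit = parse_ratelimit(sample["ratelimit-limit"])
remaining = parse_ratelimit(sample["ratelimit-remaining"])
print(limit["count"], remaining["count"])  # prints: 100 0
```

When `ratelimit-remaining` hits zero, every subsequent manifest fetch fails with the exact 429 body logged by the kubelet above, and pods sit in ImagePullBackOff until the window rolls over or an authenticated pull is used.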
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-630775 -n functional-630775
helpers_test.go:269: (dbg) Run:  kubectl --context functional-630775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-sd479 hello-node-connect-7d85dfc575-xzj2s sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-630775 describe pod busybox-mount hello-node-75c85bcc94-sd479 hello-node-connect-7d85dfc575-xzj2s sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-630775 describe pod busybox-mount hello-node-75c85bcc94-sd479 hello-node-connect-7d85dfc575-xzj2s sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:48 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://f717db27060500b345df2438da1670b15506f88784670bcd5f0a53a4bae5e82c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 07:14:51 +0000
	      Finished:     Thu, 02 Oct 2025 07:14:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m6f8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-m6f8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-630775
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.025s (2.025s including waiting). Image size: 1935750 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-sd479
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:18:57 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbkb6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tbkb6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m4s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sd479 to functional-630775
	  Warning  Failed     4m30s (x3 over 6m4s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m9s (x5 over 6m4s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m8s (x5 over 6m4s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m8s (x2 over 5m22s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     63s (x20 over 6m3s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    52s (x21 over 6m3s)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-xzj2s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:58 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqvgd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bqvgd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xzj2s to functional-630775
	  Warning  Failed     8m30s (x4 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m6s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m5s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m5s                 kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x42 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x42 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:54 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcjbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bcjbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/sp-pod to functional-630775
	  Warning  Failed     8m46s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m20s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m20s (x4 over 10m)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m20s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m57s (x22 over 10m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m57s (x22 over 10m)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.95s)

TestFunctional/parallel/PersistentVolumeClaim (249.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [098a0d88-d8f7-44bc-9b2b-448769c02475] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002970856s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-630775 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-630775 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-630775 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-630775 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [09853e6d-5c6c-4130-ae9a-981e745f8548] Pending
helpers_test.go:352: "sp-pod" [09853e6d-5c6c-4130-ae9a-981e745f8548] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-630775 -n functional-630775
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-02 07:18:54.839887979 +0000 UTC m=+2565.727235308
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-630775 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-630775 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-630775/192.168.49.2
Start Time:       Thu, 02 Oct 2025 07:14:54 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:  10.244.0.6
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcjbh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-bcjbh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m                    default-scheduler  Successfully assigned default/sp-pod to functional-630775
  Warning  Failed     2m38s                 kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    72s (x5 over 4m)      kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     72s (x4 over 3m59s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     72s (x5 over 3m59s)   kubelet            Error: ErrImagePull
  Normal   BackOff    10s (x15 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     10s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-630775 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-630775 logs sp-pod -n default: exit status 1 (106.670118ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-630775 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-630775
helpers_test.go:243: (dbg) docker inspect functional-630775:

-- stdout --
	[
	    {
	        "Id": "59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65",
	        "Created": "2025-10-02T07:12:42.807200683Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 857886,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T07:12:42.868034081Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/hostname",
	        "HostsPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/hosts",
	        "LogPath": "/var/lib/docker/containers/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65/59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65-json.log",
	        "Name": "/functional-630775",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-630775:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-630775",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59dc05e609c7c4837e246806eca0fe8f9b3606ba988da9f66c952818fe722f65",
	                "LowerDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95-init/diff:/var/lib/docker/overlay2/f1b2a52495d4d5d1e70fc487fac677b5080c5f1320773666a738aa42def3e2df/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4ee5851bb423e8d4afd58101115aaa9be3505e47116b0270d9c2fb3a1ef3bc95/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-630775",
	                "Source": "/var/lib/docker/volumes/functional-630775/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-630775",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-630775",
	                "name.minikube.sigs.k8s.io": "functional-630775",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b61e4392fa5332ed827648e648c730efdd836e49a062819e890c14e7af22069",
	            "SandboxKey": "/var/run/docker/netns/7b61e4392fa5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33878"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33879"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33882"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33880"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33881"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-630775": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:e5:ea:59:a6:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "53c71bd34c60a004896ad1741793966c0aa2c75408be79d9661dcac532bd3113",
	                    "EndpointID": "113beb41032ff7995405d2b7630ce3ad773082757f0e8de5d718a56a03503484",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-630775",
	                        "59dc05e609c7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-630775 -n functional-630775
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 logs -n 25: (1.469232838s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdany-port1050225749/001:/mount-9p --alsologtostderr -v=1                   │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh -n functional-630775 sudo cat /home/docker/cp-test.txt                                                      │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ cp      │ functional-630775 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh -n functional-630775 sudo cat /tmp/does/not/exist/cp-test.txt                                               │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh -- ls -la /mount-9p                                                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh cat /mount-9p/test-1759389286511544596                                                                      │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh stat /mount-9p/created-by-test                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh stat /mount-9p/created-by-pod                                                                               │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh sudo umount -f /mount-9p                                                                                    │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdspecific-port2182164821/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh -- ls -la /mount-9p                                                                                         │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh sudo umount -f /mount-9p                                                                                    │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount1 --alsologtostderr -v=1                 │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount3 --alsologtostderr -v=1                 │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ mount   │ -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount2 --alsologtostderr -v=1                 │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ ssh     │ functional-630775 ssh findmnt -T /mount1                                                                                          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh findmnt -T /mount2                                                                                          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ ssh     │ functional-630775 ssh findmnt -T /mount3                                                                                          │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ mount   │ -p functional-630775 --kill=true                                                                                                  │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │                     │
	│ addons  │ functional-630775 addons list                                                                                                     │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	│ addons  │ functional-630775 addons list -o json                                                                                             │ functional-630775 │ jenkins │ v1.37.0 │ 02 Oct 25 07:14 UTC │ 02 Oct 25 07:14 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 07:13:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 07:13:41.671373  862172 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:13:41.671557  862172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:13:41.671561  862172 out.go:374] Setting ErrFile to fd 2...
	I1002 07:13:41.671565  862172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:13:41.671879  862172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:13:41.672253  862172 out.go:368] Setting JSON to false
	I1002 07:13:41.673222  862172 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":24971,"bootTime":1759364251,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 07:13:41.673281  862172 start.go:140] virtualization:  
	I1002 07:13:41.677205  862172 out.go:179] * [functional-630775] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:13:41.681464  862172 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:13:41.681576  862172 notify.go:220] Checking for updates...
	I1002 07:13:41.690686  862172 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:13:41.693403  862172 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:13:41.696233  862172 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 07:13:41.699138  862172 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:13:41.701918  862172 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:13:41.705297  862172 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:13:41.705392  862172 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:13:41.735946  862172 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:13:41.736063  862172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:13:41.805958  862172 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 07:13:41.796612313 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:13:41.806056  862172 docker.go:318] overlay module found
	I1002 07:13:41.809169  862172 out.go:179] * Using the docker driver based on existing profile
	I1002 07:13:41.811963  862172 start.go:304] selected driver: docker
	I1002 07:13:41.811972  862172 start.go:924] validating driver "docker" against &{Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:13:41.812079  862172 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:13:41.812182  862172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:13:41.874409  862172 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-02 07:13:41.865166486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:13:41.874846  862172 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:13:41.874871  862172 cni.go:84] Creating CNI manager for ""
	I1002 07:13:41.874926  862172 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:13:41.874963  862172 start.go:348] cluster config:
	{Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:13:41.879737  862172 out.go:179] * Starting "functional-630775" primary control-plane node in "functional-630775" cluster
	I1002 07:13:41.882670  862172 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 07:13:41.885580  862172 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 07:13:41.888386  862172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 07:13:41.888413  862172 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 07:13:41.888435  862172 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1002 07:13:41.888452  862172 cache.go:58] Caching tarball of preloaded images
	I1002 07:13:41.888541  862172 preload.go:233] Found /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1002 07:13:41.888550  862172 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1002 07:13:41.888658  862172 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/config.json ...
	I1002 07:13:41.908679  862172 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 07:13:41.908690  862172 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 07:13:41.908718  862172 cache.go:232] Successfully downloaded all kic artifacts
	I1002 07:13:41.908739  862172 start.go:360] acquireMachinesLock for functional-630775: {Name:mk33e4813bde53d334369ebfc46df1c5523ece98 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 07:13:41.908841  862172 start.go:364] duration metric: took 85.29µs to acquireMachinesLock for "functional-630775"
	I1002 07:13:41.908862  862172 start.go:96] Skipping create...Using existing machine configuration
	I1002 07:13:41.908873  862172 fix.go:54] fixHost starting: 
	I1002 07:13:41.909145  862172 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
	I1002 07:13:41.926545  862172 fix.go:112] recreateIfNeeded on functional-630775: state=Running err=<nil>
	W1002 07:13:41.926565  862172 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 07:13:41.929753  862172 out.go:252] * Updating the running docker "functional-630775" container ...
	I1002 07:13:41.929779  862172 machine.go:93] provisionDockerMachine start ...
	I1002 07:13:41.929911  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:41.948094  862172 main.go:141] libmachine: Using SSH client type: native
	I1002 07:13:41.948487  862172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1002 07:13:41.948495  862172 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 07:13:42.103204  862172 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-630775
	
	I1002 07:13:42.103221  862172 ubuntu.go:182] provisioning hostname "functional-630775"
	I1002 07:13:42.103298  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:42.138908  862172 main.go:141] libmachine: Using SSH client type: native
	I1002 07:13:42.139237  862172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1002 07:13:42.139248  862172 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-630775 && echo "functional-630775" | sudo tee /etc/hostname
	I1002 07:13:42.330129  862172 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-630775
	
	I1002 07:13:42.330233  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:42.351394  862172 main.go:141] libmachine: Using SSH client type: native
	I1002 07:13:42.351724  862172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33878 <nil> <nil>}
	I1002 07:13:42.351740  862172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-630775' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-630775/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-630775' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 07:13:42.485011  862172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 07:13:42.485028  862172 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21643-811293/.minikube CaCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21643-811293/.minikube}
	I1002 07:13:42.485045  862172 ubuntu.go:190] setting up certificates
	I1002 07:13:42.485053  862172 provision.go:84] configureAuth start
	I1002 07:13:42.485113  862172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-630775
	I1002 07:13:42.502768  862172 provision.go:143] copyHostCerts
	I1002 07:13:42.502830  862172 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem, removing ...
	I1002 07:13:42.502846  862172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem
	I1002 07:13:42.502916  862172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/ca.pem (1078 bytes)
	I1002 07:13:42.503010  862172 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem, removing ...
	I1002 07:13:42.503013  862172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem
	I1002 07:13:42.503033  862172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/cert.pem (1123 bytes)
	I1002 07:13:42.503080  862172 exec_runner.go:144] found /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem, removing ...
	I1002 07:13:42.503083  862172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem
	I1002 07:13:42.503100  862172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21643-811293/.minikube/key.pem (1679 bytes)
	I1002 07:13:42.503200  862172 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem org=jenkins.functional-630775 san=[127.0.0.1 192.168.49.2 functional-630775 localhost minikube]
	I1002 07:13:43.203949  862172 provision.go:177] copyRemoteCerts
	I1002 07:13:43.204001  862172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 07:13:43.204084  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.225902  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.321390  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1002 07:13:43.340321  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 07:13:43.358636  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 07:13:43.375984  862172 provision.go:87] duration metric: took 890.908503ms to configureAuth
	I1002 07:13:43.376001  862172 ubuntu.go:206] setting minikube options for container-runtime
	I1002 07:13:43.376197  862172 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:13:43.376203  862172 machine.go:96] duration metric: took 1.446419567s to provisionDockerMachine
	I1002 07:13:43.376209  862172 start.go:293] postStartSetup for "functional-630775" (driver="docker")
	I1002 07:13:43.376218  862172 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 07:13:43.376277  862172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 07:13:43.376349  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.393667  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.488825  862172 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 07:13:43.492256  862172 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 07:13:43.492277  862172 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 07:13:43.492288  862172 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/addons for local assets ...
	I1002 07:13:43.492342  862172 filesync.go:126] Scanning /home/jenkins/minikube-integration/21643-811293/.minikube/files for local assets ...
	I1002 07:13:43.492421  862172 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/ssl/certs/8131552.pem -> 8131552.pem in /etc/ssl/certs
	I1002 07:13:43.492492  862172 filesync.go:149] local asset: /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/test/nested/copy/813155/hosts -> hosts in /etc/test/nested/copy/813155
	I1002 07:13:43.492535  862172 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/813155
	I1002 07:13:43.500166  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/ssl/certs/8131552.pem --> /etc/ssl/certs/8131552.pem (1708 bytes)
	I1002 07:13:43.518492  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/test/nested/copy/813155/hosts --> /etc/test/nested/copy/813155/hosts (40 bytes)
	I1002 07:13:43.537488  862172 start.go:296] duration metric: took 161.254139ms for postStartSetup
	I1002 07:13:43.537560  862172 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:13:43.537614  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.554575  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.645867  862172 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 07:13:43.650517  862172 fix.go:56] duration metric: took 1.741645188s for fixHost
	I1002 07:13:43.650531  862172 start.go:83] releasing machines lock for "functional-630775", held for 1.741681856s
	I1002 07:13:43.650595  862172 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-630775
	I1002 07:13:43.666484  862172 ssh_runner.go:195] Run: cat /version.json
	I1002 07:13:43.666525  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.666542  862172 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 07:13:43.666591  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:13:43.690289  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.693611  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:13:43.875700  862172 ssh_runner.go:195] Run: systemctl --version
	I1002 07:13:43.882168  862172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 07:13:43.886396  862172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 07:13:43.886458  862172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 07:13:43.894092  862172 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 07:13:43.894106  862172 start.go:495] detecting cgroup driver to use...
	I1002 07:13:43.894136  862172 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1002 07:13:43.894182  862172 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 07:13:43.915105  862172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 07:13:43.930110  862172 docker.go:218] disabling cri-docker service (if available) ...
	I1002 07:13:43.930161  862172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 07:13:43.946127  862172 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 07:13:43.964154  862172 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 07:13:44.103128  862172 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 07:13:44.241633  862172 docker.go:234] disabling docker service ...
	I1002 07:13:44.241702  862172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 07:13:44.256712  862172 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 07:13:44.270114  862172 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 07:13:44.411561  862172 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 07:13:44.552667  862172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 07:13:44.565749  862172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 07:13:44.580175  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1002 07:13:44.589457  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 07:13:44.598152  862172 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 07:13:44.598208  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 07:13:44.606602  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 07:13:44.615008  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 07:13:44.623308  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 07:13:44.631596  862172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 07:13:44.640347  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 07:13:44.649477  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1002 07:13:44.658402  862172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1002 07:13:44.667181  862172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 07:13:44.678217  862172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 07:13:44.686401  862172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:13:44.816261  862172 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 07:13:45.293009  862172 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1002 07:13:45.293106  862172 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1002 07:13:45.300476  862172 start.go:563] Will wait 60s for crictl version
	I1002 07:13:45.300561  862172 ssh_runner.go:195] Run: which crictl
	I1002 07:13:45.312446  862172 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 07:13:45.370835  862172 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1002 07:13:45.370957  862172 ssh_runner.go:195] Run: containerd --version
	I1002 07:13:45.410700  862172 ssh_runner.go:195] Run: containerd --version
	I1002 07:13:45.446315  862172 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1002 07:13:45.449317  862172 cli_runner.go:164] Run: docker network inspect functional-630775 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 07:13:45.465254  862172 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 07:13:45.472181  862172 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 07:13:45.475046  862172 kubeadm.go:883] updating cluster {Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 07:13:45.475169  862172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1002 07:13:45.475251  862172 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:13:45.502481  862172 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 07:13:45.502493  862172 containerd.go:534] Images already preloaded, skipping extraction
	I1002 07:13:45.502567  862172 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 07:13:45.527833  862172 containerd.go:627] all images are preloaded for containerd runtime.
	I1002 07:13:45.527845  862172 cache_images.go:85] Images are preloaded, skipping loading
	I1002 07:13:45.527851  862172 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 containerd true true} ...
	I1002 07:13:45.527968  862172 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-630775 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 07:13:45.528041  862172 ssh_runner.go:195] Run: sudo crictl info
	I1002 07:13:45.560599  862172 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 07:13:45.560619  862172 cni.go:84] Creating CNI manager for ""
	I1002 07:13:45.560628  862172 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:13:45.560634  862172 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 07:13:45.560668  862172 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-630775 NodeName:functional-630775 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 07:13:45.560816  862172 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-630775"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 07:13:45.560880  862172 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 07:13:45.574010  862172 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 07:13:45.574074  862172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 07:13:45.582806  862172 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1002 07:13:45.596548  862172 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 07:13:45.610207  862172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2080 bytes)
	I1002 07:13:45.624154  862172 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 07:13:45.628249  862172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:13:45.776055  862172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:13:45.794033  862172 certs.go:69] Setting up /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775 for IP: 192.168.49.2
	I1002 07:13:45.794043  862172 certs.go:195] generating shared ca certs ...
	I1002 07:13:45.794057  862172 certs.go:227] acquiring lock for ca certs: {Name:mk33b75296d4c02eee9bab3e9582ce8896a2d7b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:13:45.794191  862172 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key
	I1002 07:13:45.794237  862172 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key
	I1002 07:13:45.794243  862172 certs.go:257] generating profile certs ...
	I1002 07:13:45.794318  862172 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.key
	I1002 07:13:45.794359  862172 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/apiserver.key.e26a4211
	I1002 07:13:45.794393  862172 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/proxy-client.key
	I1002 07:13:45.794499  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/813155.pem (1338 bytes)
	W1002 07:13:45.794525  862172 certs.go:480] ignoring /home/jenkins/minikube-integration/21643-811293/.minikube/certs/813155_empty.pem, impossibly tiny 0 bytes
	I1002 07:13:45.794532  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 07:13:45.794558  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/ca.pem (1078 bytes)
	I1002 07:13:45.794576  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/cert.pem (1123 bytes)
	I1002 07:13:45.794596  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/certs/key.pem (1679 bytes)
	I1002 07:13:45.794634  862172 certs.go:484] found cert: /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/ssl/certs/8131552.pem (1708 bytes)
	I1002 07:13:45.795320  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 07:13:45.818726  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 07:13:45.837939  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 07:13:45.855497  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 07:13:45.873965  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 07:13:45.893106  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 07:13:45.912548  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 07:13:45.930281  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 07:13:45.947458  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/ssl/certs/8131552.pem --> /usr/share/ca-certificates/8131552.pem (1708 bytes)
	I1002 07:13:45.970451  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 07:13:45.988937  862172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21643-811293/.minikube/certs/813155.pem --> /usr/share/ca-certificates/813155.pem (1338 bytes)
	I1002 07:13:46.009160  862172 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 07:13:46.023113  862172 ssh_runner.go:195] Run: openssl version
	I1002 07:13:46.029819  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8131552.pem && ln -fs /usr/share/ca-certificates/8131552.pem /etc/ssl/certs/8131552.pem"
	I1002 07:13:46.038518  862172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8131552.pem
	I1002 07:13:46.042669  862172 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 07:12 /usr/share/ca-certificates/8131552.pem
	I1002 07:13:46.042737  862172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8131552.pem
	I1002 07:13:46.084384  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8131552.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 07:13:46.092564  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 07:13:46.101206  862172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:13:46.104937  862172 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 06:36 /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:13:46.104995  862172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 07:13:46.146161  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 07:13:46.154023  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/813155.pem && ln -fs /usr/share/ca-certificates/813155.pem /etc/ssl/certs/813155.pem"
	I1002 07:13:46.162421  862172 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/813155.pem
	I1002 07:13:46.166214  862172 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 07:12 /usr/share/ca-certificates/813155.pem
	I1002 07:13:46.166282  862172 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/813155.pem
	I1002 07:13:46.207486  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/813155.pem /etc/ssl/certs/51391683.0"
	I1002 07:13:46.215451  862172 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 07:13:46.219209  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 07:13:46.260101  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 07:13:46.300983  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 07:13:46.342139  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 07:13:46.382585  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 07:13:46.423145  862172 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 07:13:46.463859  862172 kubeadm.go:400] StartCluster: {Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:13:46.463944  862172 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1002 07:13:46.464017  862172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:13:46.492948  862172 cri.go:89] found id: "3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f"
	I1002 07:13:46.492959  862172 cri.go:89] found id: "f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686"
	I1002 07:13:46.492963  862172 cri.go:89] found id: "c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c"
	I1002 07:13:46.492966  862172 cri.go:89] found id: "8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955"
	I1002 07:13:46.492969  862172 cri.go:89] found id: "03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405"
	I1002 07:13:46.492971  862172 cri.go:89] found id: "cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41"
	I1002 07:13:46.492974  862172 cri.go:89] found id: "1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd"
	I1002 07:13:46.492976  862172 cri.go:89] found id: "ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd"
	I1002 07:13:46.492979  862172 cri.go:89] found id: ""
	I1002 07:13:46.493030  862172 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1002 07:13:46.520677  862172 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405","pid":1455,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405/rootfs","created":"2025-10-02T07:13:00.102187952Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1317376370f0584960a49fd6cb04f45f"},"owner":"root"},{"ociVersion":"1.2.1","id":"059f411532ccb919c
5415f369baaabda7d12733f8305f2100bba69fd1470856b","pid":2107,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b/rootfs","created":"2025-10-02T07:13:23.546497546Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-66bc5c9577-prnlg_e558cf31-ab09-4f11-b02b-7193532b2d6a","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-prnlg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e558cf31-ab09-4f11-b
02b-7193532b2d6a"},"owner":"root"},{"ociVersion":"1.2.1","id":"0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60","pid":1753,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60/rootfs","created":"2025-10-02T07:13:12.284944156Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-q2985_ec1b36de-bb7c-407f-914c-e1ee91f5371a","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-q2985","io.kubernetes.cri.sandbox-namespace":"kube-
system","io.kubernetes.cri.sandbox-uid":"ec1b36de-bb7c-407f-914c-e1ee91f5371a"},"owner":"root"},{"ociVersion":"1.2.1","id":"1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd","pid":1374,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd/rootfs","created":"2025-10-02T07:12:59.925612135Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9","io.kubernetes.cri.sandbox-name":"etcd-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e3ba43411ab897fd13f8222729a6dcf3"},"owner":"root"},{"ociVersion":"1.2.1","id":"29714899aa
30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a","pid":1286,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a/rootfs","created":"2025-10-02T07:12:59.769636723Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-630775_1317376370f0584960a49fd6cb04f45f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"131737637
0f0584960a49fd6cb04f45f"},"owner":"root"},{"ociVersion":"1.2.1","id":"300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce","pid":2051,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce/rootfs","created":"2025-10-02T07:13:23.476638606Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_098a0d88-d8f7-44bc-9b2b-448769c02475","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":
"kube-system","io.kubernetes.cri.sandbox-uid":"098a0d88-d8f7-44bc-9b2b-448769c02475"},"owner":"root"},{"ociVersion":"1.2.1","id":"3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f","pid":2179,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f/rootfs","created":"2025-10-02T07:13:23.679402548Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri.sandbox-id":"059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-prnlg","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e558cf31-ab09-4f11-b02b-7193532b2d6a"},"owner":"root"},{"ociVersion
":"1.2.1","id":"7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad","pid":1265,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad/rootfs","created":"2025-10-02T07:12:59.75205182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-630775_a8e04e9b0f0b64e0f0e12bbc6b34672f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system"
,"io.kubernetes.cri.sandbox-uid":"a8e04e9b0f0b64e0f0e12bbc6b34672f"},"owner":"root"},{"ociVersion":"1.2.1","id":"8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955","pid":1788,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955/rootfs","created":"2025-10-02T07:13:12.45479525Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri.sandbox-id":"ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855","io.kubernetes.cri.sandbox-name":"kube-proxy-9nzx4","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"061b14c9-b276-4aa3-96f2-b5a112fade93"},"owner":"root"},{"ociVersion":"1.2.1","id":"ad703d64a5ee
86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855","pid":1714,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855/rootfs","created":"2025-10-02T07:13:12.240628159Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-9nzx4_061b14c9-b276-4aa3-96f2-b5a112fade93","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-9nzx4","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"061b14c9-b276-4aa3-96f2-b5a112fade93"},"o
wner":"root"},{"ociVersion":"1.2.1","id":"b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31","pid":1275,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31/rootfs","created":"2025-10-02T07:12:59.751239332Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-630775_88272a6a98f81ff09ea3b44b1394376a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-630775","io.kubernetes.cri.sandbox-namespace":"kub
e-system","io.kubernetes.cri.sandbox-uid":"88272a6a98f81ff09ea3b44b1394376a"},"owner":"root"},{"ociVersion":"1.2.1","id":"b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9","pid":1248,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9/rootfs","created":"2025-10-02T07:12:59.730148156Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-630775_e3ba43411ab897fd13f8222729a6dcf3","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-f
unctional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e3ba43411ab897fd13f8222729a6dcf3"},"owner":"root"},{"ociVersion":"1.2.1","id":"c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c","pid":1811,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c/rootfs","created":"2025-10-02T07:13:12.510926181Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60","io.kubernetes.cri.sandbox-name":"kindnet-q2985","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ec1b36de-bb7c-40
7f-914c-e1ee91f5371a"},"owner":"root"},{"ociVersion":"1.2.1","id":"ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd","pid":1352,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd/rootfs","created":"2025-10-02T07:12:59.898256882Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"88272a6a98f81ff09ea3b44b1394376a"},"owner":"root"},{"ociVersion":"1.2.1","id":"cf05f437077f64106799182946cfcdfa1e3e8
24a91a1380626bc5f83e8fdca41","pid":1433,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41/rootfs","created":"2025-10-02T07:13:00.013519867Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-630775","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a8e04e9b0f0b64e0f0e12bbc6b34672f"},"owner":"root"},{"ociVersion":"1.2.1","id":"f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686","pid":2142,"status":"running","bundle":"/run/con
tainerd/io.containerd.runtime.v2.task/k8s.io/f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686/rootfs","created":"2025-10-02T07:13:23.594203341Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"098a0d88-d8f7-44bc-9b2b-448769c02475"},"owner":"root"}]
	I1002 07:13:46.520993  862172 cri.go:126] list returned 16 containers
	I1002 07:13:46.521001  862172 cri.go:129] container: {ID:03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 Status:running}
	I1002 07:13:46.521019  862172 cri.go:135] skipping {03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 running}: state = "running", want "paused"
	I1002 07:13:46.521030  862172 cri.go:129] container: {ID:059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b Status:running}
	I1002 07:13:46.521036  862172 cri.go:131] skipping 059f411532ccb919c5415f369baaabda7d12733f8305f2100bba69fd1470856b - not in ps
	I1002 07:13:46.521040  862172 cri.go:129] container: {ID:0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60 Status:running}
	I1002 07:13:46.521044  862172 cri.go:131] skipping 0b1223812eb7c0eebfe7389c4ad000729d4976572dbb3509053412cef9f6cf60 - not in ps
	I1002 07:13:46.521047  862172 cri.go:129] container: {ID:1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd Status:running}
	I1002 07:13:46.521052  862172 cri.go:135] skipping {1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd running}: state = "running", want "paused"
	I1002 07:13:46.521057  862172 cri.go:129] container: {ID:29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a Status:running}
	I1002 07:13:46.521062  862172 cri.go:131] skipping 29714899aa30076667aa7f971517af78278bb205b2bc1bc67ae1633808a16e9a - not in ps
	I1002 07:13:46.521065  862172 cri.go:129] container: {ID:300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce Status:running}
	I1002 07:13:46.521069  862172 cri.go:131] skipping 300b99d061daa973c082210214d34e669110cf74243d1851bd3c92d333f8dcce - not in ps
	I1002 07:13:46.521072  862172 cri.go:129] container: {ID:3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f Status:running}
	I1002 07:13:46.521077  862172 cri.go:135] skipping {3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f running}: state = "running", want "paused"
	I1002 07:13:46.521081  862172 cri.go:129] container: {ID:7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad Status:running}
	I1002 07:13:46.521084  862172 cri.go:131] skipping 7c212a34d95f598c096634fa20c48b0e0b2c75c1bd3aa9cad934f80cf91a05ad - not in ps
	I1002 07:13:46.521086  862172 cri.go:129] container: {ID:8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 Status:running}
	I1002 07:13:46.521092  862172 cri.go:135] skipping {8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 running}: state = "running", want "paused"
	I1002 07:13:46.521096  862172 cri.go:129] container: {ID:ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855 Status:running}
	I1002 07:13:46.521101  862172 cri.go:131] skipping ad703d64a5ee86c1e9fd6b9d494f8e45005551777d2991b70c4fcb9cc2dd2855 - not in ps
	I1002 07:13:46.521103  862172 cri.go:129] container: {ID:b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31 Status:running}
	I1002 07:13:46.521108  862172 cri.go:131] skipping b21252a90501fba5f0119185e72b62a712b61880970bb11b09b2689b650f7d31 - not in ps
	I1002 07:13:46.521111  862172 cri.go:129] container: {ID:b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9 Status:running}
	I1002 07:13:46.521115  862172 cri.go:131] skipping b2957b80a9860baaf294b150c07c4d108a9e7a938acd449e3bb93cd47c8f90d9 - not in ps
	I1002 07:13:46.521118  862172 cri.go:129] container: {ID:c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c Status:running}
	I1002 07:13:46.521123  862172 cri.go:135] skipping {c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c running}: state = "running", want "paused"
	I1002 07:13:46.521126  862172 cri.go:129] container: {ID:ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd Status:running}
	I1002 07:13:46.521131  862172 cri.go:135] skipping {ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd running}: state = "running", want "paused"
	I1002 07:13:46.521135  862172 cri.go:129] container: {ID:cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 Status:running}
	I1002 07:13:46.521139  862172 cri.go:135] skipping {cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 running}: state = "running", want "paused"
	I1002 07:13:46.521143  862172 cri.go:129] container: {ID:f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 Status:running}
	I1002 07:13:46.521150  862172 cri.go:135] skipping {f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 running}: state = "running", want "paused"
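The `cri.go` lines above repeatedly apply the same two-branch filter: a container ID is kept only if it appears in the `crictl ps` listing and its state matches the wanted state (here `"paused"`, so every `running` container is skipped). A minimal sketch of that selection logic — illustrative names, not minikube's actual API:

```go
package main

import "fmt"

// container mirrors the {ID Status} pairs printed in the log above.
type container struct {
	ID     string
	Status string
}

// filterByState keeps only containers that are present in the ps listing
// and whose status equals wantState -- the two "skipping" branches in the log.
func filterByState(all []container, inPS map[string]bool, wantState string) []string {
	var keep []string
	for _, c := range all {
		if !inPS[c.ID] {
			continue // log: "skipping <id> - not in ps"
		}
		if c.Status != wantState {
			continue // log: `skipping {<id> running}: state = "running", want "paused"`
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	all := []container{
		{"aaa", "running"},
		{"bbb", "paused"},
		{"ccc", "paused"}, // paused, but absent from the ps listing
	}
	inPS := map[string]bool{"aaa": true, "bbb": true}
	fmt.Println(filterByState(all, inPS, "paused")) // → [bbb]
}
```

In the run above every container is `running`, so the filter yields an empty set and minikube proceeds straight to the restart path.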
	I1002 07:13:46.521204  862172 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 07:13:46.529142  862172 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 07:13:46.529151  862172 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 07:13:46.529199  862172 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 07:13:46.536684  862172 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:13:46.537212  862172 kubeconfig.go:125] found "functional-630775" server: "https://192.168.49.2:8441"
	I1002 07:13:46.538573  862172 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 07:13:46.546318  862172 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 07:12:50.564522661 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 07:13:45.619378224 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
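The drift check above shells out to `diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new`; any difference (here, the swapped `enable-admission-plugins` value) makes minikube reconfigure the cluster from the new file. A hedged stand-in for that comparison — plain byte equality instead of the real `diff -u` subprocess, with hypothetical file names:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

// configDrifted reports whether the deployed config differs from the freshly
// rendered one -- the condition behind "detected kubeadm config drift" above.
// (Illustrative stand-in; minikube itself runs `diff -u` over ssh.)
func configDrifted(deployed, rendered []byte) bool {
	return !bytes.Equal(deployed, rendered)
}

func main() {
	dir, _ := os.MkdirTemp("", "kubeadm")
	defer os.RemoveAll(dir)

	// Mimic the two admission-plugin values seen in the diff above.
	old := []byte("enable-admission-plugins: NamespaceLifecycle\n")
	updated := []byte("enable-admission-plugins: NamespaceAutoProvision\n")
	_ = os.WriteFile(filepath.Join(dir, "kubeadm.yaml"), old, 0o600)
	_ = os.WriteFile(filepath.Join(dir, "kubeadm.yaml.new"), updated, 0o600)

	a, _ := os.ReadFile(filepath.Join(dir, "kubeadm.yaml"))
	b, _ := os.ReadFile(filepath.Join(dir, "kubeadm.yaml.new"))
	fmt.Println("drift:", configDrifted(a, b)) // → drift: true
}
```

On drift, the log shows the consequence: stop all kube-system containers, stop the kubelet, regenerate certs and kubeconfigs, and re-run the `kubeadm init` phases against the new YAML.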
	I1002 07:13:46.546326  862172 kubeadm.go:1160] stopping kube-system containers ...
	I1002 07:13:46.546337  862172 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1002 07:13:46.546389  862172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 07:13:46.576731  862172 cri.go:89] found id: "3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f"
	I1002 07:13:46.576743  862172 cri.go:89] found id: "f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686"
	I1002 07:13:46.576746  862172 cri.go:89] found id: "c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c"
	I1002 07:13:46.576749  862172 cri.go:89] found id: "8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955"
	I1002 07:13:46.576752  862172 cri.go:89] found id: "03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405"
	I1002 07:13:46.576754  862172 cri.go:89] found id: "cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41"
	I1002 07:13:46.576776  862172 cri.go:89] found id: "1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd"
	I1002 07:13:46.576779  862172 cri.go:89] found id: "ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd"
	I1002 07:13:46.576782  862172 cri.go:89] found id: ""
	I1002 07:13:46.576786  862172 cri.go:252] Stopping containers: [3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c 8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd]
	I1002 07:13:46.576842  862172 ssh_runner.go:195] Run: which crictl
	I1002 07:13:46.580717  862172 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c 8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd
	I1002 07:14:09.260854  862172 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686 c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c 8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955 03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405 cf05f437077f64106799182946cfcdfa1e3e824a91a1380626bc5f83e8fdca41 1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd ced51561aa819a23857f08b90e39d5006b21c0032ebbd7177bf0ef602a9c6cbd: (22.680100624s)
	I1002 07:14:09.260914  862172 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 07:14:09.356787  862172 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 07:14:09.364851  862172 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 07:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 07:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  2 07:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  2 07:12 /etc/kubernetes/scheduler.conf
	
	I1002 07:14:09.364926  862172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 07:14:09.372718  862172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 07:14:09.380925  862172 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:14:09.380983  862172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 07:14:09.388424  862172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 07:14:09.396153  862172 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:14:09.396207  862172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 07:14:09.403866  862172 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 07:14:09.411486  862172 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 07:14:09.411541  862172 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 07:14:09.419031  862172 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 07:14:09.426969  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:09.477718  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:11.144343  862172 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.666597981s)
	I1002 07:14:11.144409  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:11.378088  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:11.451044  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:11.531208  862172 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:14:11.531273  862172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:14:12.032021  862172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:14:12.531343  862172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:14:12.553208  862172 api_server.go:72] duration metric: took 1.022009892s to wait for apiserver process to appear ...
	I1002 07:14:12.553222  862172 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:14:12.553240  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:17.553922  862172 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1002 07:14:17.553946  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:17.595327  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 07:14:17.595343  862172 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 07:14:18.054016  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:18.062320  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 07:14:18.062351  862172 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 07:14:18.553915  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:18.566171  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1002 07:14:18.566189  862172 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1002 07:14:19.053755  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:19.062039  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 07:14:19.075884  862172 api_server.go:141] control plane version: v1.34.1
	I1002 07:14:19.075901  862172 api_server.go:131] duration metric: took 6.522674439s to wait for apiserver health ...
	I1002 07:14:19.075909  862172 cni.go:84] Creating CNI manager for ""
	I1002 07:14:19.075914  862172 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 07:14:19.079225  862172 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1002 07:14:19.082076  862172 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1002 07:14:19.086181  862172 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1002 07:14:19.086191  862172 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1002 07:14:19.099261  862172 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1002 07:14:19.503118  862172 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:14:19.506726  862172 system_pods.go:59] 8 kube-system pods found
	I1002 07:14:19.506747  862172 system_pods.go:61] "coredns-66bc5c9577-prnlg" [e558cf31-ab09-4f11-b02b-7193532b2d6a] Running
	I1002 07:14:19.506752  862172 system_pods.go:61] "etcd-functional-630775" [54789ff5-5de9-4d26-a2ed-016f2d213969] Running
	I1002 07:14:19.506755  862172 system_pods.go:61] "kindnet-q2985" [ec1b36de-bb7c-407f-914c-e1ee91f5371a] Running
	I1002 07:14:19.506762  862172 system_pods.go:61] "kube-apiserver-functional-630775" [da2e8b84-4a5a-440e-a363-d5e49f3b063d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:14:19.506769  862172 system_pods.go:61] "kube-controller-manager-functional-630775" [7a0b359b-590f-4d60-b4ff-db5abdb995dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:14:19.506774  862172 system_pods.go:61] "kube-proxy-9nzx4" [061b14c9-b276-4aa3-96f2-b5a112fade93] Running
	I1002 07:14:19.506778  862172 system_pods.go:61] "kube-scheduler-functional-630775" [a9da3f11-fb1d-4914-9e27-bc0ed0031000] Running
	I1002 07:14:19.506781  862172 system_pods.go:61] "storage-provisioner" [098a0d88-d8f7-44bc-9b2b-448769c02475] Running
	I1002 07:14:19.506786  862172 system_pods.go:74] duration metric: took 3.658563ms to wait for pod list to return data ...
	I1002 07:14:19.506792  862172 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:14:19.509368  862172 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:14:19.509385  862172 node_conditions.go:123] node cpu capacity is 2
	I1002 07:14:19.509395  862172 node_conditions.go:105] duration metric: took 2.59932ms to run NodePressure ...
	I1002 07:14:19.509455  862172 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 07:14:19.761828  862172 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1002 07:14:19.765664  862172 kubeadm.go:743] kubelet initialised
	I1002 07:14:19.765675  862172 kubeadm.go:744] duration metric: took 3.833797ms waiting for restarted kubelet to initialise ...
	I1002 07:14:19.765688  862172 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 07:14:19.787866  862172 ops.go:34] apiserver oom_adj: -16
	I1002 07:14:19.787878  862172 kubeadm.go:601] duration metric: took 33.258722304s to restartPrimaryControlPlane
	I1002 07:14:19.787885  862172 kubeadm.go:402] duration metric: took 33.324039011s to StartCluster
	I1002 07:14:19.787899  862172 settings.go:142] acquiring lock: {Name:mkfabb257d5e6dc89516b7f3eecfb5ad470245b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:14:19.787965  862172 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:14:19.788635  862172 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/kubeconfig: {Name:mk61b1a16c6c070d43ba1e4fed7f7f8861077db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 07:14:19.788918  862172 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1002 07:14:19.789227  862172 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:14:19.789276  862172 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 07:14:19.789359  862172 addons.go:69] Setting storage-provisioner=true in profile "functional-630775"
	I1002 07:14:19.789366  862172 addons.go:69] Setting default-storageclass=true in profile "functional-630775"
	I1002 07:14:19.789370  862172 addons.go:238] Setting addon storage-provisioner=true in "functional-630775"
	W1002 07:14:19.789376  862172 addons.go:247] addon storage-provisioner should already be in state true
	I1002 07:14:19.789380  862172 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-630775"
	I1002 07:14:19.789394  862172 host.go:66] Checking if "functional-630775" exists ...
	I1002 07:14:19.789709  862172 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
	I1002 07:14:19.789928  862172 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
	I1002 07:14:19.796485  862172 out.go:179] * Verifying Kubernetes components...
	I1002 07:14:19.799724  862172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 07:14:19.831000  862172 addons.go:238] Setting addon default-storageclass=true in "functional-630775"
	W1002 07:14:19.831011  862172 addons.go:247] addon default-storageclass should already be in state true
	I1002 07:14:19.831032  862172 host.go:66] Checking if "functional-630775" exists ...
	I1002 07:14:19.831436  862172 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
	I1002 07:14:19.834614  862172 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 07:14:19.837838  862172 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:14:19.837854  862172 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 07:14:19.837915  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:14:19.854010  862172 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 07:14:19.854022  862172 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 07:14:19.854080  862172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
	I1002 07:14:19.884793  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:14:19.912028  862172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
	I1002 07:14:20.095555  862172 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 07:14:20.111918  862172 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 07:14:20.116246  862172 node_ready.go:35] waiting up to 6m0s for node "functional-630775" to be "Ready" ...
	I1002 07:14:20.122714  862172 node_ready.go:49] node "functional-630775" is "Ready"
	I1002 07:14:20.122732  862172 node_ready.go:38] duration metric: took 6.465863ms for node "functional-630775" to be "Ready" ...
	I1002 07:14:20.122744  862172 api_server.go:52] waiting for apiserver process to appear ...
	I1002 07:14:20.122800  862172 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:14:20.148198  862172 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 07:14:20.902287  862172 api_server.go:72] duration metric: took 1.113343492s to wait for apiserver process to appear ...
	I1002 07:14:20.902298  862172 api_server.go:88] waiting for apiserver healthz status ...
	I1002 07:14:20.902316  862172 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1002 07:14:20.912126  862172 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1002 07:14:20.913105  862172 api_server.go:141] control plane version: v1.34.1
	I1002 07:14:20.913118  862172 api_server.go:131] duration metric: took 10.81484ms to wait for apiserver health ...
	I1002 07:14:20.913125  862172 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 07:14:20.914304  862172 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1002 07:14:20.916689  862172 system_pods.go:59] 8 kube-system pods found
	I1002 07:14:20.916704  862172 system_pods.go:61] "coredns-66bc5c9577-prnlg" [e558cf31-ab09-4f11-b02b-7193532b2d6a] Running
	I1002 07:14:20.916709  862172 system_pods.go:61] "etcd-functional-630775" [54789ff5-5de9-4d26-a2ed-016f2d213969] Running
	I1002 07:14:20.916713  862172 system_pods.go:61] "kindnet-q2985" [ec1b36de-bb7c-407f-914c-e1ee91f5371a] Running
	I1002 07:14:20.916719  862172 system_pods.go:61] "kube-apiserver-functional-630775" [da2e8b84-4a5a-440e-a363-d5e49f3b063d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:14:20.916725  862172 system_pods.go:61] "kube-controller-manager-functional-630775" [7a0b359b-590f-4d60-b4ff-db5abdb995dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:14:20.916729  862172 system_pods.go:61] "kube-proxy-9nzx4" [061b14c9-b276-4aa3-96f2-b5a112fade93] Running
	I1002 07:14:20.916733  862172 system_pods.go:61] "kube-scheduler-functional-630775" [a9da3f11-fb1d-4914-9e27-bc0ed0031000] Running
	I1002 07:14:20.916736  862172 system_pods.go:61] "storage-provisioner" [098a0d88-d8f7-44bc-9b2b-448769c02475] Running
	I1002 07:14:20.916742  862172 system_pods.go:74] duration metric: took 3.611106ms to wait for pod list to return data ...
	I1002 07:14:20.916749  862172 default_sa.go:34] waiting for default service account to be created ...
	I1002 07:14:20.917317  862172 addons.go:514] duration metric: took 1.128039601s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1002 07:14:20.921851  862172 default_sa.go:45] found service account: "default"
	I1002 07:14:20.921864  862172 default_sa.go:55] duration metric: took 5.110626ms for default service account to be created ...
	I1002 07:14:20.921871  862172 system_pods.go:116] waiting for k8s-apps to be running ...
	I1002 07:14:20.925617  862172 system_pods.go:86] 8 kube-system pods found
	I1002 07:14:20.925632  862172 system_pods.go:89] "coredns-66bc5c9577-prnlg" [e558cf31-ab09-4f11-b02b-7193532b2d6a] Running
	I1002 07:14:20.925638  862172 system_pods.go:89] "etcd-functional-630775" [54789ff5-5de9-4d26-a2ed-016f2d213969] Running
	I1002 07:14:20.925641  862172 system_pods.go:89] "kindnet-q2985" [ec1b36de-bb7c-407f-914c-e1ee91f5371a] Running
	I1002 07:14:20.925648  862172 system_pods.go:89] "kube-apiserver-functional-630775" [da2e8b84-4a5a-440e-a363-d5e49f3b063d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1002 07:14:20.925653  862172 system_pods.go:89] "kube-controller-manager-functional-630775" [7a0b359b-590f-4d60-b4ff-db5abdb995dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1002 07:14:20.925658  862172 system_pods.go:89] "kube-proxy-9nzx4" [061b14c9-b276-4aa3-96f2-b5a112fade93] Running
	I1002 07:14:20.925663  862172 system_pods.go:89] "kube-scheduler-functional-630775" [a9da3f11-fb1d-4914-9e27-bc0ed0031000] Running
	I1002 07:14:20.925666  862172 system_pods.go:89] "storage-provisioner" [098a0d88-d8f7-44bc-9b2b-448769c02475] Running
	I1002 07:14:20.925672  862172 system_pods.go:126] duration metric: took 3.796308ms to wait for k8s-apps to be running ...
	I1002 07:14:20.925678  862172 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 07:14:20.925733  862172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:14:20.948257  862172 system_svc.go:56] duration metric: took 22.570082ms WaitForService to wait for kubelet
	I1002 07:14:20.948274  862172 kubeadm.go:586] duration metric: took 1.159335084s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 07:14:20.948290  862172 node_conditions.go:102] verifying NodePressure condition ...
	I1002 07:14:20.958158  862172 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1002 07:14:20.958173  862172 node_conditions.go:123] node cpu capacity is 2
	I1002 07:14:20.958182  862172 node_conditions.go:105] duration metric: took 9.888033ms to run NodePressure ...
	I1002 07:14:20.958194  862172 start.go:241] waiting for startup goroutines ...
	I1002 07:14:20.958202  862172 start.go:246] waiting for cluster config update ...
	I1002 07:14:20.958211  862172 start.go:255] writing updated cluster config ...
	I1002 07:14:20.958502  862172 ssh_runner.go:195] Run: rm -f paused
	I1002 07:14:20.962320  862172 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:14:20.973046  862172 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-prnlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:20.987079  862172 pod_ready.go:94] pod "coredns-66bc5c9577-prnlg" is "Ready"
	I1002 07:14:20.987104  862172 pod_ready.go:86] duration metric: took 14.025987ms for pod "coredns-66bc5c9577-prnlg" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:20.994730  862172 pod_ready.go:83] waiting for pod "etcd-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:21.004386  862172 pod_ready.go:94] pod "etcd-functional-630775" is "Ready"
	I1002 07:14:21.004403  862172 pod_ready.go:86] duration metric: took 9.659747ms for pod "etcd-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:21.007414  862172 pod_ready.go:83] waiting for pod "kube-apiserver-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 07:14:23.013583  862172 pod_ready.go:104] pod "kube-apiserver-functional-630775" is not "Ready", error: <nil>
	I1002 07:14:24.512876  862172 pod_ready.go:94] pod "kube-apiserver-functional-630775" is "Ready"
	I1002 07:14:24.512890  862172 pod_ready.go:86] duration metric: took 3.50546221s for pod "kube-apiserver-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:24.515170  862172 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	W1002 07:14:26.520108  862172 pod_ready.go:104] pod "kube-controller-manager-functional-630775" is not "Ready", error: <nil>
	I1002 07:14:27.520641  862172 pod_ready.go:94] pod "kube-controller-manager-functional-630775" is "Ready"
	I1002 07:14:27.520656  862172 pod_ready.go:86] duration metric: took 3.005471687s for pod "kube-controller-manager-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.522900  862172 pod_ready.go:83] waiting for pod "kube-proxy-9nzx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.527167  862172 pod_ready.go:94] pod "kube-proxy-9nzx4" is "Ready"
	I1002 07:14:27.527180  862172 pod_ready.go:86] duration metric: took 4.267526ms for pod "kube-proxy-9nzx4" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.529513  862172 pod_ready.go:83] waiting for pod "kube-scheduler-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.766495  862172 pod_ready.go:94] pod "kube-scheduler-functional-630775" is "Ready"
	I1002 07:14:27.766510  862172 pod_ready.go:86] duration metric: took 236.984661ms for pod "kube-scheduler-functional-630775" in "kube-system" namespace to be "Ready" or be gone ...
	I1002 07:14:27.766520  862172 pod_ready.go:40] duration metric: took 6.804180892s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1002 07:14:27.820304  862172 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1002 07:14:27.823366  862172 out.go:179] * Done! kubectl is now configured to use "functional-630775" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f717db2706050       1611cd07b61d5       4 minutes ago       Exited              mount-munger              0                   a0764ff8470de       busybox-mount                               default
	9c9c69562b8f2       35f3cbee4fb77       4 minutes ago       Running             nginx                     0                   e8dc22f25086e       nginx-svc                                   default
	6ad64793002dd       43911e833d64d       4 minutes ago       Running             kube-apiserver            0                   3762e74a92c10       kube-apiserver-functional-630775            kube-system
	178c96249f9c1       7eb2c6ff0c5a7       4 minutes ago       Running             kube-controller-manager   2                   7c212a34d95f5       kube-controller-manager-functional-630775   kube-system
	602bfadd7763e       a1894772a478e       4 minutes ago       Running             etcd                      1                   b2957b80a9860       etcd-functional-630775                      kube-system
	004330dfe2dd2       b5f57ec6b9867       5 minutes ago       Running             kube-scheduler            1                   29714899aa300       kube-scheduler-functional-630775            kube-system
	8ba344dad1233       7eb2c6ff0c5a7       5 minutes ago       Exited              kube-controller-manager   1                   7c212a34d95f5       kube-controller-manager-functional-630775   kube-system
	49a7d1907be92       ba04bb24b9575       5 minutes ago       Running             storage-provisioner       1                   300b99d061daa       storage-provisioner                         kube-system
	89b98ca1639a4       05baa95f5142d       5 minutes ago       Running             kube-proxy                1                   ad703d64a5ee8       kube-proxy-9nzx4                            kube-system
	8dcd88165ca08       b1a8c6f707935       5 minutes ago       Running             kindnet-cni               1                   0b1223812eb7c       kindnet-q2985                               kube-system
	9d5e48870ba26       138784d87c9c5       5 minutes ago       Running             coredns                   1                   059f411532ccb       coredns-66bc5c9577-prnlg                    kube-system
	3fbb4a2c8d8bc       138784d87c9c5       5 minutes ago       Exited              coredns                   0                   059f411532ccb       coredns-66bc5c9577-prnlg                    kube-system
	f53adf4e4cfba       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner       0                   300b99d061daa       storage-provisioner                         kube-system
	c55ee29bbc732       b1a8c6f707935       5 minutes ago       Exited              kindnet-cni               0                   0b1223812eb7c       kindnet-q2985                               kube-system
	8ca7e16833e24       05baa95f5142d       5 minutes ago       Exited              kube-proxy                0                   ad703d64a5ee8       kube-proxy-9nzx4                            kube-system
	03ee97847e6c7       b5f57ec6b9867       5 minutes ago       Exited              kube-scheduler            0                   29714899aa300       kube-scheduler-functional-630775            kube-system
	1bd4dd24c2653       a1894772a478e       5 minutes ago       Exited              etcd                      0                   b2957b80a9860       etcd-functional-630775                      kube-system
	
	
	==> containerd <==
	Oct 02 07:15:39 functional-630775 containerd[3593]: time="2025-10-02T07:15:39.579684373Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 07:15:39 functional-630775 containerd[3593]: time="2025-10-02T07:15:39.582009252Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:15:39 functional-630775 containerd[3593]: time="2025-10-02T07:15:39.734346471Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:15:40 functional-630775 containerd[3593]: time="2025-10-02T07:15:40.028051292Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:15:40 functional-630775 containerd[3593]: time="2025-10-02T07:15:40.028167540Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 02 07:16:15 functional-630775 containerd[3593]: time="2025-10-02T07:16:15.579719466Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 02 07:16:15 functional-630775 containerd[3593]: time="2025-10-02T07:16:15.582626838Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:16:15 functional-630775 containerd[3593]: time="2025-10-02T07:16:15.727555911Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:16:16 functional-630775 containerd[3593]: time="2025-10-02T07:16:16.124382310Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:16:16 functional-630775 containerd[3593]: time="2025-10-02T07:16:16.124497303Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21215"
	Oct 02 07:16:31 functional-630775 containerd[3593]: time="2025-10-02T07:16:31.579877075Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 07:16:31 functional-630775 containerd[3593]: time="2025-10-02T07:16:31.582253874Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:16:31 functional-630775 containerd[3593]: time="2025-10-02T07:16:31.714073672Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:16:32 functional-630775 containerd[3593]: time="2025-10-02T07:16:32.011779655Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:16:32 functional-630775 containerd[3593]: time="2025-10-02T07:16:32.011822288Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 02 07:17:42 functional-630775 containerd[3593]: time="2025-10-02T07:17:42.579026425Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 02 07:17:42 functional-630775 containerd[3593]: time="2025-10-02T07:17:42.581262853Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:17:42 functional-630775 containerd[3593]: time="2025-10-02T07:17:42.719408579Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:17:42 functional-630775 containerd[3593]: time="2025-10-02T07:17:42.998496274Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:17:42 functional-630775 containerd[3593]: time="2025-10-02T07:17:42.998600681Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 02 07:17:56 functional-630775 containerd[3593]: time="2025-10-02T07:17:56.579829595Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 02 07:17:56 functional-630775 containerd[3593]: time="2025-10-02T07:17:56.582641229Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:17:56 functional-630775 containerd[3593]: time="2025-10-02T07:17:56.734411112Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 02 07:17:57 functional-630775 containerd[3593]: time="2025-10-02T07:17:57.145300276Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 02 07:17:57 functional-630775 containerd[3593]: time="2025-10-02T07:17:57.145351910Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=11739"
	
	
	==> coredns [3fbb4a2c8d8bc6f8342a03befca6ead2adb2040e6f7235c0c27b00f3ee6e7f9f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38638 - 28631 "HINFO IN 8890447447211847089.1590523317042042169. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012992148s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9d5e48870ba26acf37929a5697515a9c28c95aa154630492e8a65ff7db1cbe96] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56466 - 63142 "HINFO IN 2250469666875045467.358806669876498839. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016943821s
	
	
	==> describe nodes <==
	Name:               functional-630775
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-630775
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e67b65c0f4e92b22cf6bb9baed3c99d519c7afdb
	                    minikube.k8s.io/name=functional-630775
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_02T07_13_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 02 Oct 2025 07:13:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-630775
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 02 Oct 2025 07:18:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 02 Oct 2025 07:15:19 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 02 Oct 2025 07:15:19 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 02 Oct 2025 07:15:19 +0000   Thu, 02 Oct 2025 07:13:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 02 Oct 2025 07:15:19 +0000   Thu, 02 Oct 2025 07:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-630775
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 cadd65090dff457dbb73450103633ff2
	  System UUID:                6a9d513c-1640-40d4-8a86-98c871c3750d
	  Boot ID:                    7d897d56-c217-4cfc-926c-91f9be002777
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-7d85dfc575-xzj2s          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-prnlg                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m45s
	  kube-system                 etcd-functional-630775                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m50s
	  kube-system                 kindnet-q2985                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m45s
	  kube-system                 kube-apiserver-functional-630775             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-functional-630775    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-proxy-9nzx4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-scheduler-functional-630775             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m43s                  kube-proxy       
	  Normal   Starting                 5m                     kube-proxy       
	  Normal   NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    5m50s                  kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 5m50s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  5m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m50s                  kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m50s                  kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m50s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m46s                  node-controller  Node functional-630775 event: Registered Node functional-630775 in Controller
	  Normal   NodeReady                5m33s                  kubelet          Node functional-630775 status is now: NodeReady
	  Normal   Starting                 4m45s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m45s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m45s (x8 over 4m45s)  kubelet          Node functional-630775 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m45s (x8 over 4m45s)  kubelet          Node functional-630775 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node functional-630775 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m35s                  node-controller  Node functional-630775 event: Registered Node functional-630775 in Controller
	
	
	==> dmesg <==
	[Oct 2 05:10] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 2 06:18] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 2 06:35] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [1bd4dd24c2653cb5d20f80d79b8f3038fcac06686187e5fb9c12c8e9e8d1bbfd] <==
	{"level":"warn","ts":"2025-10-02T07:13:02.525498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.545863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.572157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.598265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.615644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.641510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:13:02.739888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36202","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-02T07:13:52.114867Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-02T07:13:52.114936Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-630775","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-02T07:13:52.115053Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:59.121770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-02T07:13:59.123526Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.123603Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-02T07:13:59.123856Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-02T07:13:59.123880Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124602Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124651Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:59.124661Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124700Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-02T07:13:59.124720Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-02T07:13:59.124727Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.127456Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-02T07:13:59.127532Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-02T07:13:59.127611Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-02T07:13:59.127622Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-630775","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [602bfadd7763eb054613d766eab0c38eff37bc1a71150682c8892da1032e031a] <==
	{"level":"warn","ts":"2025-10-02T07:14:16.467076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.487393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.505668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.516995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.533988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.557989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.567822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.580883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.596396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.612225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.626910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.645890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.661313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.676336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.692121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.707254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.724915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.741439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.757272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.777617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.788071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.820394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.836544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.851858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-02T07:14:16.929953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41430","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 07:18:56 up  7:01,  0 user,  load average: 1.05, 0.89, 0.97
	Linux functional-630775 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8dcd88165ca08da5e62e74301de3c24c91e43ad60914ede11e5bfc04c0dcfff6] <==
	I1002 07:16:53.249871       1 main.go:301] handling current node
	I1002 07:17:03.245948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:17:03.245989       1 main.go:301] handling current node
	I1002 07:17:13.243886       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:17:13.243923       1 main.go:301] handling current node
	I1002 07:17:23.249990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:17:23.250026       1 main.go:301] handling current node
	I1002 07:17:33.241983       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:17:33.242022       1 main.go:301] handling current node
	I1002 07:17:43.241575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:17:43.241610       1 main.go:301] handling current node
	I1002 07:17:53.249154       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:17:53.249257       1 main.go:301] handling current node
	I1002 07:18:03.244287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:18:03.244386       1 main.go:301] handling current node
	I1002 07:18:13.241750       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:18:13.241788       1 main.go:301] handling current node
	I1002 07:18:23.251147       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:18:23.251181       1 main.go:301] handling current node
	I1002 07:18:33.243018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:18:33.243066       1 main.go:301] handling current node
	I1002 07:18:43.241378       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:18:43.241433       1 main.go:301] handling current node
	I1002 07:18:53.249846       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:18:53.249886       1 main.go:301] handling current node
	
	
	==> kindnet [c55ee29bbc7324ba8d557383e7c85bc2cbc5e081bedb55efaa1ef005ed54df4c] <==
	I1002 07:13:12.711039       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1002 07:13:12.711295       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1002 07:13:12.711458       1 main.go:148] setting mtu 1500 for CNI 
	I1002 07:13:12.711478       1 main.go:178] kindnetd IP family: "ipv4"
	I1002 07:13:12.711488       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-02T07:13:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1002 07:13:12.915994       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1002 07:13:12.916209       1 controller.go:381] "Waiting for informer caches to sync"
	I1002 07:13:12.916310       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1002 07:13:12.919193       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1002 07:13:13.119841       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1002 07:13:13.119865       1 metrics.go:72] Registering metrics
	I1002 07:13:13.208874       1 controller.go:711] "Syncing nftables rules"
	I1002 07:13:22.922829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:22.922892       1 main.go:301] handling current node
	I1002 07:13:32.922862       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:32.922901       1 main.go:301] handling current node
	I1002 07:13:42.916843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1002 07:13:42.916881       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ad64793002dd63e29f2e6d0c903589a03b0c6e995ae310ae36b85d4ee81c65b] <==
	I1002 07:14:17.694182       1 autoregister_controller.go:144] Starting autoregister controller
	I1002 07:14:17.694225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1002 07:14:17.694281       1 cache.go:39] Caches are synced for autoregister controller
	I1002 07:14:17.735482       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1002 07:14:17.762063       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1002 07:14:17.763062       1 policy_source.go:240] refreshing policies
	I1002 07:14:17.763376       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1002 07:14:17.763564       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1002 07:14:17.763609       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1002 07:14:17.763777       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1002 07:14:17.809707       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 07:14:17.822548       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1002 07:14:18.445605       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1002 07:14:18.560451       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1002 07:14:18.773767       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1002 07:14:18.775445       1 controller.go:667] quota admission added evaluator for: endpoints
	I1002 07:14:18.783231       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1002 07:14:19.495814       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1002 07:14:19.631063       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1002 07:14:19.697525       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 07:14:19.704659       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1002 07:14:21.128106       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1002 07:14:31.174050       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.102.190"}
	I1002 07:14:37.116336       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.131.33"}
	I1002 07:14:58.634088       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.89.202"}
	
	
	==> kube-controller-manager [178c96249f9c11e548d1469eeefcd5de32442210f1af35a1b1c70bbcbb5caee9] <==
	I1002 07:14:21.130160       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1002 07:14:21.132258       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1002 07:14:21.138828       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1002 07:14:21.141423       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1002 07:14:21.145794       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1002 07:14:21.149245       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1002 07:14:21.150621       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1002 07:14:21.150784       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1002 07:14:21.150890       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1002 07:14:21.150986       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1002 07:14:21.151070       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1002 07:14:21.153607       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1002 07:14:21.160880       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1002 07:14:21.166718       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1002 07:14:21.166892       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1002 07:14:21.166733       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:14:21.167035       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-630775"
	I1002 07:14:21.167125       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1002 07:14:21.167131       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1002 07:14:21.167498       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1002 07:14:21.170850       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1002 07:14:21.173344       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1002 07:14:21.177670       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1002 07:14:21.191011       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1002 07:14:21.192087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [8ba344dad123377a72939202d6efa440d87b7663e4bd64b2dadc679537027ddf] <==
	I1002 07:14:02.006777       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kubelet-client"
	I1002 07:14:02.006886       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.007514       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1002 07:14:02.007744       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-kube-apiserver-client"
	I1002 07:14:02.007890       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.008312       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1002 07:14:02.008349       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="service-lb-controller"
	I1002 07:14:02.008672       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1002 07:14:02.008694       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrsigning-legacy-unknown"
	I1002 07:14:02.008756       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1002 07:14:02.015394       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1002 07:14:02.015532       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I1002 07:14:02.015851       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1002 07:14:02.015925       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	I1002 07:14:02.042270       1 controllermanager.go:781] "Started controller" controller="daemonset-controller"
	I1002 07:14:02.042638       1 daemon_controller.go:310] "Starting daemon sets controller" logger="daemonset-controller"
	I1002 07:14:02.042661       1 shared_informer.go:349] "Waiting for caches to sync" controller="daemon sets"
	I1002 07:14:02.069196       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1002 07:14:02.069427       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1002 07:14:02.069480       1 shared_informer.go:349] "Waiting for caches to sync" controller="certificate-csrapproving"
	I1002 07:14:02.089131       1 controllermanager.go:781] "Started controller" controller="token-cleaner-controller"
	I1002 07:14:02.089615       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1002 07:14:02.089751       1 shared_informer.go:349] "Waiting for caches to sync" controller="token_cleaner"
	I1002 07:14:02.089871       1 shared_informer.go:356] "Caches are synced" controller="token_cleaner"
	F1002 07:14:03.126256       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/persistent-volume-binder": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [89b98ca1639a412e0dbaa8f47354c23f8c3711eaae363a9da73be6b9e81e25f3] <==
	I1002 07:13:53.078292       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:13:55.525782       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:55.699510       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:55.699620       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:13:55.699748       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:55.732249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:13:55.732366       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:55.736441       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:55.737057       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:55.737117       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:55.738312       1 config.go:200] "Starting service config controller"
	I1002 07:13:55.738373       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:55.738414       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:55.738445       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:55.738489       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:55.738518       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:55.742586       1 config.go:309] "Starting node config controller"
	I1002 07:13:55.742643       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:55.742670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:55.838897       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:55.839113       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1002 07:13:55.839128       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [8ca7e16833e241e728354165abbbe9bb519fff42043d47e6c2669e84e8c9f955] <==
	I1002 07:13:12.526794       1 server_linux.go:53] "Using iptables proxy"
	I1002 07:13:12.615718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1002 07:13:12.716596       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1002 07:13:12.716633       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1002 07:13:12.716937       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1002 07:13:12.738780       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1002 07:13:12.738843       1 server_linux.go:132] "Using iptables Proxier"
	I1002 07:13:12.742821       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1002 07:13:12.743324       1 server.go:527] "Version info" version="v1.34.1"
	I1002 07:13:12.743349       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1002 07:13:12.744994       1 config.go:200] "Starting service config controller"
	I1002 07:13:12.745017       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1002 07:13:12.745035       1 config.go:106] "Starting endpoint slice config controller"
	I1002 07:13:12.745039       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1002 07:13:12.745050       1 config.go:403] "Starting serviceCIDR config controller"
	I1002 07:13:12.745054       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1002 07:13:12.748972       1 config.go:309] "Starting node config controller"
	I1002 07:13:12.748999       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1002 07:13:12.749008       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1002 07:13:12.845522       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1002 07:13:12.845564       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1002 07:13:12.845753       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [004330dfe2dd26e68f7ba578cc7ac15e5d034dcb6e6707f60a375272ad35f422] <==
	I1002 07:14:01.072468       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1002 07:14:01.072514       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:14:01.075068       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:14:01.072527       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.080823       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.176945       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1002 07:14:01.182234       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1002 07:14:01.185613       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1002 07:14:17.541784       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:14:17.542079       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:14:17.542215       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:14:17.542331       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:14:17.542498       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:14:17.542694       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:14:17.542845       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:14:17.542983       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:14:17.543100       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:14:17.543254       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1002 07:14:17.543414       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:14:17.543555       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:14:17.543654       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:14:17.543823       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:14:17.594447       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:14:17.603388       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1002 07:14:17.603430       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	
	
	==> kube-scheduler [03ee97847e6c78a9542037f4b695c2e2f64f3b55ecb2361274eb2e40e3753405] <==
	E1002 07:13:04.274361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1002 07:13:04.275625       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1002 07:13:04.275830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1002 07:13:04.284646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1002 07:13:04.285040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1002 07:13:04.285087       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1002 07:13:04.285136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1002 07:13:04.285179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1002 07:13:04.285226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1002 07:13:04.285263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1002 07:13:04.285411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1002 07:13:04.285932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1002 07:13:04.285985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1002 07:13:04.286030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1002 07:13:04.286161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1002 07:13:04.286209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1002 07:13:04.286278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1002 07:13:04.287852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1002 07:13:05.566193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:51.964488       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1002 07:13:51.964595       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1002 07:13:51.964607       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1002 07:13:51.964625       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1002 07:13:51.965728       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1002 07:13:51.965750       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 02 07:16:53 functional-630775 kubelet[4707]: E1002 07:16:53.578788    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:17:02 functional-630775 kubelet[4707]: E1002 07:17:02.578318    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:17:08 functional-630775 kubelet[4707]: E1002 07:17:08.579006    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:17:15 functional-630775 kubelet[4707]: E1002 07:17:15.579100    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:17:22 functional-630775 kubelet[4707]: E1002 07:17:22.579169    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:17:28 functional-630775 kubelet[4707]: E1002 07:17:28.579093    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:17:33 functional-630775 kubelet[4707]: E1002 07:17:33.579031    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:17:42 functional-630775 kubelet[4707]: E1002 07:17:42.998947    4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 07:17:42 functional-630775 kubelet[4707]: E1002 07:17:42.999011    4707 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 02 07:17:42 functional-630775 kubelet[4707]: E1002 07:17:42.999095    4707 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(09853e6d-5c6c-4130-ae9a-981e745f8548): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:17:42 functional-630775 kubelet[4707]: E1002 07:17:42.999133    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:17:45 functional-630775 kubelet[4707]: E1002 07:17:45.579439    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:17:54 functional-630775 kubelet[4707]: E1002 07:17:54.579018    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:17:57 functional-630775 kubelet[4707]: E1002 07:17:57.145773    4707 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 07:17:57 functional-630775 kubelet[4707]: E1002 07:17:57.145849    4707 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 02 07:17:57 functional-630775 kubelet[4707]: E1002 07:17:57.145934    4707 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-xzj2s_default(ec49cc62-edb5-44f4-8182-2f3ecfd5a092): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 02 07:17:57 functional-630775 kubelet[4707]: E1002 07:17:57.145973    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:18:07 functional-630775 kubelet[4707]: E1002 07:18:07.579317    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:18:12 functional-630775 kubelet[4707]: E1002 07:18:12.578952    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:18:22 functional-630775 kubelet[4707]: E1002 07:18:22.578731    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:18:25 functional-630775 kubelet[4707]: E1002 07:18:25.579070    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:18:33 functional-630775 kubelet[4707]: E1002 07:18:33.578994    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:18:40 functional-630775 kubelet[4707]: E1002 07:18:40.578972    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
	Oct 02 07:18:44 functional-630775 kubelet[4707]: E1002 07:18:44.578379    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="09853e6d-5c6c-4130-ae9a-981e745f8548"
	Oct 02 07:18:55 functional-630775 kubelet[4707]: E1002 07:18:55.578938    4707 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-xzj2s" podUID="ec49cc62-edb5-44f4-8182-2f3ecfd5a092"
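The kubelet entries above embed the registry's response inside one escaped error string; a minimal sketch (a hypothetical helper, not part of minikube or the kubelet) of extracting the backed-off image and HTTP status from such a line:

```python
import re

# Matches kubelet ImagePullBackOff lines like the ones above, where the image
# name is wrapped in escaped quotes and the registry's HTTP status follows
# "unexpected status code <manifest URL>: <code>".
PULL_ERROR_RE = re.compile(
    r'Back-off pulling image [\\"]+(?P<image>[A-Za-z0-9._/:-]+)[\\"]+'
    r'.*?unexpected status code \S+ (?P<status>\d{3})\b'
)

def parse_pull_error(line: str):
    """Return (image, http_status) from a kubelet pull-backoff line, or None."""
    m = PULL_ERROR_RE.search(line)
    if m is None:
        return None
    return m.group("image"), int(m.group("status"))
```

Run over the journal excerpt above, this would report `kicbase/echo-server` and `docker.io/nginx` both failing with status 429, i.e. Docker Hub's unauthenticated pull rate limit, which is the common cause behind the failures in this report.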
	
	
	==> storage-provisioner [49a7d1907be92f523c680b46b2703bf050574d70e75ee55cd4658f2b84a344da] <==
	W1002 07:18:32.183782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:34.187051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:34.193655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:36.198392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:36.203185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:38.206263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:38.210646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:40.214215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:40.220991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:42.225754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:42.249776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:44.252462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:44.257094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:46.260709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:46.264899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:48.267922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:48.275293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:50.278710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:50.283238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:52.286585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:52.290933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:54.293936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:54.299049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:56.309007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:18:56.366593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f53adf4e4cfbad131e10f66b3dae1c825a3471c1faea3664856f8e62f2a7e686] <==
	W1002 07:13:25.679197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:27.682714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:27.690039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:29.693880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:29.702886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:31.707705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:31.715973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:33.719455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:33.725416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:35.728350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:35.735553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:37.738556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:37.744692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:39.747697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:39.754894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:41.760064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:41.766492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:43.770083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:43.774532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:45.779482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:45.786182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:47.789951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:47.794536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:49.797953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1002 07:13:49.805383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-630775 -n functional-630775
helpers_test.go:269: (dbg) Run:  kubectl --context functional-630775 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-connect-7d85dfc575-xzj2s sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-630775 describe pod busybox-mount hello-node-connect-7d85dfc575-xzj2s sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-630775 describe pod busybox-mount hello-node-connect-7d85dfc575-xzj2s sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:48 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  containerd://f717db27060500b345df2438da1670b15506f88784670bcd5f0a53a4bae5e82c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 02 Oct 2025 07:14:51 +0000
	      Finished:     Thu, 02 Oct 2025 07:14:51 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m6f8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-m6f8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4m8s  default-scheduler  Successfully assigned default/busybox-mount to functional-630775
	  Normal  Pulling    4m8s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m6s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.025s (2.025s including waiting). Image size: 1935750 bytes.
	  Normal  Created    4m6s  kubelet            Created container: mount-munger
	  Normal  Started    4m6s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-connect-7d85dfc575-xzj2s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:58 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bqvgd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-bqvgd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  3m58s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xzj2s to functional-630775
	  Warning  Failed     2m25s (x4 over 3m58s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    61s (x5 over 3m59s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     60s (x5 over 3m58s)    kubelet            Error: ErrImagePull
	  Warning  Failed     60s                    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2s (x15 over 3m58s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x15 over 3m58s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-630775/192.168.49.2
	Start Time:       Thu, 02 Oct 2025 07:14:54 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcjbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-bcjbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-630775
	  Warning  Failed     2m41s                kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    75s (x5 over 4m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     75s (x4 over 4m2s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     75s (x5 over 4m2s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    13s (x15 over 4m2s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     13s (x15 over 4m2s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (249.73s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-630775 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-630775 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-sd479" [7122762e-93c5-4612-aecc-f4ad583b342c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1002 07:23:24.880923  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-630775 -n functional-630775
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-02 07:28:58.053404936 +0000 UTC m=+3168.940752256
functional_test.go:1460: (dbg) Run:  kubectl --context functional-630775 describe po hello-node-75c85bcc94-sd479 -n default
functional_test.go:1460: (dbg) kubectl --context functional-630775 describe po hello-node-75c85bcc94-sd479 -n default:
Name:             hello-node-75c85bcc94-sd479
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-630775/192.168.49.2
Start Time:       Thu, 02 Oct 2025 07:18:57 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tbkb6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-tbkb6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sd479 to functional-630775
Warning  Failed     8m26s (x3 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     7m4s (x2 over 9m18s)    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m59s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m48s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-630775 logs hello-node-75c85bcc94-sd479 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-630775 logs hello-node-75c85bcc94-sd479 -n default: exit status 1 (97.321115ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-sd479" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-630775 logs hello-node-75c85bcc94-sd479 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.73s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 service --namespace=default --https --url hello-node: exit status 115 (402.951966ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30982
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-630775 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 service hello-node --url --format={{.IP}}: exit status 115 (416.593945ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-630775 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 service hello-node --url: exit status 115 (429.18787ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30982
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-630775 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30982
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.43s)
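The failure above is a readiness race: the NodePort URL resolves (stdout shows http://192.168.49.2:30982) but `SVC_UNREACHABLE` is raised because no running pod backs `hello-node` yet. A generic retry helper along these lines could absorb that race; the probe shown in the comment is a hypothetical stand-in (the real check would be a `kubectl get pods` with the service's actual labels), and the demo uses `true` so the sketch is self-contained:

```shell
# retry_until: poll a probe command until it succeeds or a deadline (seconds) passes.
# In the real test the probe would be something like
#   kubectl get pods -l app=hello-node --field-selector=status.phase=Running ...
# (hypothetical selector -- adjust to the deployment's labels).
retry_until() {
  deadline=$1; shift
  start=$(date +%s)
  until "$@"; do
    if [ $(( $(date +%s) - start )) -ge "$deadline" ]; then
      return 1    # probe never succeeded within the deadline
    fi
    sleep 1
  done
}

# demo: a probe that succeeds immediately
retry_until 5 true && echo "service ready"
```

Wrapping the probe this way would make `minikube service ... --url` run only after the pod is actually serving.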

                                                
                                    

Test pass (290/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.39
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.02
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.15
18 TestDownloadOnly/v1.34.1/DeleteAll 0.32
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.26
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 423.34
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 9.91
35 TestAddons/parallel/Registry 16.87
36 TestAddons/parallel/RegistryCreds 0.76
38 TestAddons/parallel/InspektorGadget 5.32
39 TestAddons/parallel/MetricsServer 6
42 TestAddons/parallel/Headlamp 37.86
43 TestAddons/parallel/CloudSpanner 6.62
45 TestAddons/parallel/NvidiaDevicePlugin 5.65
46 TestAddons/parallel/Yakd 11.85
48 TestAddons/StoppedEnableDisable 12.33
49 TestCertOptions 43.75
50 TestCertExpiration 239.61
52 TestForceSystemdFlag 36.56
53 TestForceSystemdEnv 46.55
59 TestErrorSpam/setup 31.64
60 TestErrorSpam/start 0.79
61 TestErrorSpam/status 1.12
62 TestErrorSpam/pause 1.65
63 TestErrorSpam/unpause 1.85
64 TestErrorSpam/stop 1.45
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 49.64
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 7.05
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
76 TestFunctional/serial/CacheCmd/cache/add_local 1.26
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 46.21
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.5
88 TestFunctional/serial/InvalidService 4.78
90 TestFunctional/parallel/ConfigCmd 0.4
92 TestFunctional/parallel/DryRun 0.45
93 TestFunctional/parallel/InternationalLanguage 0.22
94 TestFunctional/parallel/StatusCmd 1.02
99 TestFunctional/parallel/AddonsCmd 0.14
102 TestFunctional/parallel/SSHCmd 0.7
103 TestFunctional/parallel/CpCmd 2.02
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 1.87
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.97
114 TestFunctional/parallel/License 0.33
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.64
117 TestFunctional/parallel/Version/short 0.08
118 TestFunctional/parallel/Version/components 1.16
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.55
127 TestFunctional/parallel/ImageCommands/Setup 0.72
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.17
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.33
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.15
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
144 TestFunctional/parallel/MountCmd/any-port 8.43
145 TestFunctional/parallel/MountCmd/specific-port 2.13
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
149 TestFunctional/parallel/ProfileCmd/profile_list 0.43
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
151 TestFunctional/parallel/ServiceCmd/List 1.3
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.3
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 170.18
164 TestMultiControlPlane/serial/DeployApp 8.27
165 TestMultiControlPlane/serial/PingHostFromPods 1.57
166 TestMultiControlPlane/serial/AddWorkerNode 60.74
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
169 TestMultiControlPlane/serial/CopyFile 19.56
170 TestMultiControlPlane/serial/StopSecondaryNode 12.78
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.02
172 TestMultiControlPlane/serial/RestartSecondaryNode 13.28
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.01
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.09
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.38
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.86
177 TestMultiControlPlane/serial/StopCluster 35.68
178 TestMultiControlPlane/serial/RestartCluster 67.64
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
180 TestMultiControlPlane/serial/AddSecondaryNode 59.82
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.06
185 TestJSONOutput/start/Command 80.5
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.72
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.8
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 38.64
211 TestKicCustomNetwork/use_default_bridge_network 41.7
212 TestKicExistingNetwork 33.23
213 TestKicCustomSubnet 33.36
214 TestKicStaticIP 35.69
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 74.43
219 TestMountStart/serial/StartWithMountFirst 8.82
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 6.6
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.61
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.22
226 TestMountStart/serial/RestartStopped 7.65
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 109.63
231 TestMultiNode/serial/DeployApp2Nodes 5.24
232 TestMultiNode/serial/PingHostFrom2Pods 0.95
233 TestMultiNode/serial/AddNode 28.38
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.69
236 TestMultiNode/serial/CopyFile 10.07
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 8.13
239 TestMultiNode/serial/RestartKeepsNodes 73.04
240 TestMultiNode/serial/DeleteNode 5.58
241 TestMultiNode/serial/StopMultiNode 23.87
242 TestMultiNode/serial/RestartMultiNode 52.38
243 TestMultiNode/serial/ValidateNameConflict 37.6
248 TestPreload 124.64
250 TestScheduledStopUnix 109.17
253 TestInsufficientStorage 12.1
254 TestRunningBinaryUpgrade 66.55
256 TestKubernetesUpgrade 355.65
257 TestMissingContainerUpgrade 143.34
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 47.44
261 TestNoKubernetes/serial/StartWithStopK8s 18.24
262 TestNoKubernetes/serial/Start 8.37
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
264 TestNoKubernetes/serial/ProfileList 0.67
265 TestNoKubernetes/serial/Stop 1.21
266 TestNoKubernetes/serial/StartNoArgs 6.86
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.53
268 TestStoppedBinaryUpgrade/Setup 0.71
269 TestStoppedBinaryUpgrade/Upgrade 56.95
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.57
279 TestPause/serial/Start 49.11
280 TestPause/serial/SecondStartNoReconfiguration 6.5
281 TestPause/serial/Pause 0.71
282 TestPause/serial/VerifyStatus 0.31
283 TestPause/serial/Unpause 0.76
284 TestPause/serial/PauseAgain 0.96
285 TestPause/serial/DeletePaused 2.83
286 TestPause/serial/VerifyDeletedResources 0.4
294 TestNetworkPlugins/group/false 3.79
299 TestStartStop/group/old-k8s-version/serial/FirstStart 60.68
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.48
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
302 TestStartStop/group/old-k8s-version/serial/Stop 11.94
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/old-k8s-version/serial/SecondStart 54.13
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
308 TestStartStop/group/old-k8s-version/serial/Pause 3.25
310 TestStartStop/group/embed-certs/serial/FirstStart 88.13
312 TestStartStop/group/no-preload/serial/FirstStart 67.19
313 TestStartStop/group/embed-certs/serial/DeployApp 10.35
314 TestStartStop/group/no-preload/serial/DeployApp 9.34
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
316 TestStartStop/group/embed-certs/serial/Stop 12
317 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.01
318 TestStartStop/group/no-preload/serial/Stop 11.96
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
320 TestStartStop/group/embed-certs/serial/SecondStart 64.16
321 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
322 TestStartStop/group/no-preload/serial/SecondStart 56.2
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.2
327 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
328 TestStartStop/group/no-preload/serial/Pause 3.42
329 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
330 TestStartStop/group/embed-certs/serial/Pause 4.79
332 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.9
334 TestStartStop/group/newest-cni/serial/FirstStart 46.33
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
337 TestStartStop/group/newest-cni/serial/Stop 1.24
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 16.75
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
343 TestStartStop/group/newest-cni/serial/Pause 2.85
344 TestNetworkPlugins/group/auto/Start 86.32
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.43
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.35
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.28
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.21
350 TestNetworkPlugins/group/auto/KubeletFlags 0.32
351 TestNetworkPlugins/group/auto/NetCatPod 9.28
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
353 TestNetworkPlugins/group/auto/DNS 0.22
354 TestNetworkPlugins/group/auto/Localhost 0.18
355 TestNetworkPlugins/group/auto/HairPin 0.15
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.27
359 TestNetworkPlugins/group/flannel/Start 70.01
360 TestNetworkPlugins/group/calico/Start 58.16
361 TestNetworkPlugins/group/calico/ControllerPod 6
362 TestNetworkPlugins/group/flannel/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.31
364 TestNetworkPlugins/group/calico/NetCatPod 10.26
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
366 TestNetworkPlugins/group/flannel/NetCatPod 9.33
367 TestNetworkPlugins/group/calico/DNS 0.19
368 TestNetworkPlugins/group/calico/Localhost 0.21
369 TestNetworkPlugins/group/calico/HairPin 0.26
370 TestNetworkPlugins/group/flannel/DNS 0.21
371 TestNetworkPlugins/group/flannel/Localhost 0.15
372 TestNetworkPlugins/group/flannel/HairPin 0.17
373 TestNetworkPlugins/group/custom-flannel/Start 63.23
374 TestNetworkPlugins/group/kindnet/Start 85.37
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.27
377 TestNetworkPlugins/group/custom-flannel/DNS 0.17
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
380 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
381 TestNetworkPlugins/group/bridge/Start 54.12
382 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
383 TestNetworkPlugins/group/kindnet/NetCatPod 10.34
384 TestNetworkPlugins/group/kindnet/DNS 0.23
385 TestNetworkPlugins/group/kindnet/Localhost 0.2
386 TestNetworkPlugins/group/kindnet/HairPin 0.23
387 TestNetworkPlugins/group/enable-default-cni/Start 50.16
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
389 TestNetworkPlugins/group/bridge/NetCatPod 10.34
390 TestNetworkPlugins/group/bridge/DNS 0.21
391 TestNetworkPlugins/group/bridge/Localhost 0.19
392 TestNetworkPlugins/group/bridge/HairPin 0.15
393 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
394 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
395 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
396 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
397 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
TestDownloadOnly/v1.28.0/json-events (5.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-492765 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-492765 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.384776795s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.39s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 06:36:14.539739  813155 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1002 06:36:14.539826  813155 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
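The preload-exists check boils down to composing the expected tarball path from the Kubernetes version, container runtime, storage driver, and architecture, then testing that the file is present. A self-contained sketch (the cache root here is a temporary directory standing in for `$MINIKUBE_HOME/.minikube`; the `v18` schema prefix mirrors the "Found local preload" line in the log above):

```shell
# Compose the expected preload tarball path and check for it, as the test does.
K8S_VERSION=v1.28.0
RUNTIME=containerd
ARCH=arm64
CACHE=$(mktemp -d)/cache/preloaded-tarball   # demo stand-in for the .minikube cache
mkdir -p "$CACHE"
TARBALL="$CACHE/preloaded-images-k8s-v18-$K8S_VERSION-$RUNTIME-overlay2-$ARCH.tar.lz4"
touch "$TARBALL"                             # simulate a previously downloaded preload
if [ -f "$TARBALL" ]; then
  echo "Found local preload: $TARBALL"
fi
```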

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-492765
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-492765: exit status 85 (80.738114ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-492765 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-492765 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:36:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:36:09.199378  813160 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:36:09.199541  813160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:09.199553  813160 out.go:374] Setting ErrFile to fd 2...
	I1002 06:36:09.199558  813160 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:09.199823  813160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	W1002 06:36:09.199959  813160 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21643-811293/.minikube/config/config.json: open /home/jenkins/minikube-integration/21643-811293/.minikube/config/config.json: no such file or directory
	I1002 06:36:09.200344  813160 out.go:368] Setting JSON to true
	I1002 06:36:09.201221  813160 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22719,"bootTime":1759364251,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 06:36:09.201290  813160 start.go:140] virtualization:  
	I1002 06:36:09.205321  813160 out.go:99] [download-only-492765] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1002 06:36:09.205485  813160 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 06:36:09.205554  813160 notify.go:220] Checking for updates...
	I1002 06:36:09.208299  813160 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:36:09.211336  813160 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:36:09.214503  813160 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:36:09.217329  813160 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 06:36:09.220246  813160 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 06:36:09.225770  813160 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:36:09.226060  813160 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:36:09.250592  813160 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:36:09.250709  813160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:09.303014  813160 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 06:36:09.293512024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:09.303127  813160 docker.go:318] overlay module found
	I1002 06:36:09.306032  813160 out.go:99] Using the docker driver based on user configuration
	I1002 06:36:09.306068  813160 start.go:304] selected driver: docker
	I1002 06:36:09.306085  813160 start.go:924] validating driver "docker" against <nil>
	I1002 06:36:09.306202  813160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:09.361172  813160 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-02 06:36:09.351799212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:09.361339  813160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:36:09.361641  813160 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 06:36:09.361824  813160 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:36:09.364876  813160 out.go:171] Using Docker driver with root privileges
	I1002 06:36:09.367805  813160 cni.go:84] Creating CNI manager for ""
	I1002 06:36:09.367889  813160 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1002 06:36:09.367904  813160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 06:36:09.368000  813160 start.go:348] cluster config:
	{Name:download-only-492765 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-492765 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 06:36:09.370977  813160 out.go:99] Starting "download-only-492765" primary control-plane node in "download-only-492765" cluster
	I1002 06:36:09.371013  813160 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1002 06:36:09.373943  813160 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1002 06:36:09.374009  813160 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1002 06:36:09.374090  813160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 06:36:09.390870  813160 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:36:09.391775  813160 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 06:36:09.391881  813160 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 06:36:09.430665  813160 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1002 06:36:09.430699  813160 cache.go:58] Caching tarball of preloaded images
	I1002 06:36:09.431546  813160 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1002 06:36:09.434885  813160 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1002 06:36:09.434926  813160 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1002 06:36:09.521708  813160 preload.go:290] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1002 06:36:09.521877  813160 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1002 06:36:12.439494  813160 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1002 06:36:12.439949  813160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/download-only-492765/config.json ...
	I1002 06:36:12.440010  813160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/download-only-492765/config.json: {Name:mk1cc5ad315bcd38f40104bf34735d33b197e72c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 06:36:12.440233  813160 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1002 06:36:12.440501  813160 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21643-811293/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-492765 host does not exist
	  To start a cluster, run: "minikube start -p download-only-492765"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-492765
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (4.02s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-547243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-547243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.017438769s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.02s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 06:36:18.989336  813155 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1002 06:36:18.989375  813155 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21643-811293/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-547243
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-547243: exit status 85 (152.289236ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-492765 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-492765 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ delete  │ -p download-only-492765                                                                                                                                                               │ download-only-492765 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │ 02 Oct 25 06:36 UTC │
	│ start   │ -o=json --download-only -p download-only-547243 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-547243 │ jenkins │ v1.37.0 │ 02 Oct 25 06:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 06:36:15
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 06:36:15.025154  813360 out.go:360] Setting OutFile to fd 1 ...
	I1002 06:36:15.025287  813360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:15.025299  813360 out.go:374] Setting ErrFile to fd 2...
	I1002 06:36:15.025304  813360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 06:36:15.025612  813360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 06:36:15.026082  813360 out.go:368] Setting JSON to true
	I1002 06:36:15.026990  813360 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22724,"bootTime":1759364251,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 06:36:15.027074  813360 start.go:140] virtualization:  
	I1002 06:36:15.030692  813360 out.go:99] [download-only-547243] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 06:36:15.030947  813360 notify.go:220] Checking for updates...
	I1002 06:36:15.033792  813360 out.go:171] MINIKUBE_LOCATION=21643
	I1002 06:36:15.036808  813360 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 06:36:15.039789  813360 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 06:36:15.042706  813360 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 06:36:15.045694  813360 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1002 06:36:15.051383  813360 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 06:36:15.051685  813360 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 06:36:15.079916  813360 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 06:36:15.080022  813360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:15.149218  813360 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 06:36:15.140617358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:15.149329  813360 docker.go:318] overlay module found
	I1002 06:36:15.152282  813360 out.go:99] Using the docker driver based on user configuration
	I1002 06:36:15.152320  813360 start.go:304] selected driver: docker
	I1002 06:36:15.152329  813360 start.go:924] validating driver "docker" against <nil>
	I1002 06:36:15.152440  813360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 06:36:15.212074  813360 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-02 06:36:15.203273423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 06:36:15.212229  813360 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 06:36:15.212492  813360 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1002 06:36:15.212649  813360 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 06:36:15.215719  813360 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-547243 host does not exist
	  To start a cluster, run: "minikube start -p download-only-547243"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

TestDownloadOnly/v1.34.1/DeleteAll (0.32s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.32s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.26s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-547243
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.26s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I1002 06:36:20.827736  813155 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-704812 --alsologtostderr --binary-mirror http://127.0.0.1:37961 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-704812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-704812
--- PASS: TestBinaryMirror (0.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-110926
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-110926: exit status 85 (73.26475ms)

-- stdout --
	* Profile "addons-110926" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-110926"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-110926
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-110926: exit status 85 (79.954215ms)

-- stdout --
	* Profile "addons-110926" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-110926"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (423.34s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-110926 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-110926 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m3.337455234s)
--- PASS: TestAddons/Setup (423.34s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-110926 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-110926 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (9.91s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-110926 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-110926 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3c484a3d-86ae-4f51-99f5-938ea90ce981] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3c484a3d-86ae-4f51-99f5-938ea90ce981] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003394523s
addons_test.go:694: (dbg) Run:  kubectl --context addons-110926 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-110926 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-110926 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-110926 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.91s)

TestAddons/parallel/Registry (16.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.519068ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-926mp" [128b0b3c-82f0-4c85-91fe-811a2d1d6d8d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00355141s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-bqxnl" [2f31b378-0421-4b8d-b501-d0267a1ef54f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.011685118s
addons_test.go:392: (dbg) Run:  kubectl --context addons-110926 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-110926 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-110926 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.161791484s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 ip
2025/10/02 06:55:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.87s)

TestAddons/parallel/RegistryCreds (0.76s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.869905ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-110926
addons_test.go:332: (dbg) Run:  kubectl --context addons-110926 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

TestAddons/parallel/InspektorGadget (5.32s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-5sxf6" [fcffa54d-3987-4041-8ba6-c912003d5c2a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003885439s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.32s)

TestAddons/parallel/MetricsServer (6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.003329ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fg8z6" [6aa933b4-a4f8-406a-a56a-7d1fc1d857a5] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003712723s
addons_test.go:463: (dbg) Run:  kubectl --context addons-110926 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)

TestAddons/parallel/Headlamp (37.86s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-110926 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-110926 --alsologtostderr -v=1: (1.000318871s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-cdxjc" [0c7d368f-9ebb-4f12-a286-d31aaa2e3a2d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-cdxjc" [0c7d368f-9ebb-4f12-a286-d31aaa2e3a2d] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 31.003499177s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 addons disable headlamp --alsologtostderr -v=1: (5.850156969s)
--- PASS: TestAddons/parallel/Headlamp (37.86s)

TestAddons/parallel/CloudSpanner (6.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-zwxnx" [d9c9ff69-c3e8-4d18-8bcd-f05dcaa29bf4] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003810599s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

TestAddons/parallel/NvidiaDevicePlugin (5.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-pptng" [89459066-9bc0-4b50-b70c-45084a4801eb] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.009594418s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
TestAddons/parallel/Yakd (11.85s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-22kn4" [c1c4e809-6ce5-4869-bdec-70d6f72aafd0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003070792s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-110926 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-110926 addons disable yakd --alsologtostderr -v=1: (5.847069683s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-110926
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-110926: (12.045880521s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-110926
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-110926
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-110926
--- PASS: TestAddons/StoppedEnableDisable (12.33s)

                                                
                                    
TestCertOptions (43.75s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-524331 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-524331 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (41.008958115s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-524331 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-524331 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-524331 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-524331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-524331
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-524331: (1.998157738s)
--- PASS: TestCertOptions (43.75s)
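The SAN check above can be reproduced outside minikube. A minimal sketch, assuming only OpenSSL 1.1.1+ is installed: generate a throwaway certificate carrying the same IPs and names the test passes via `--apiserver-ips`/`--apiserver-names`, then inspect it the same way the test inspects `/var/lib/minikube/certs/apiserver.crt` over ssh (the `/tmp` paths here are illustrative, not the test's):

```shell
# Create a self-signed cert with the SANs the test requests (hypothetical file names).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver-demo.key -out /tmp/apiserver-demo.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"

# Same inspection the test runs; the SAN line should list all four entries.
openssl x509 -text -noout -in /tmp/apiserver-demo.crt | grep -A1 "Subject Alternative Name"
```

`-addext` requires OpenSSL 1.1.1 or newer; on older builds the SANs have to go through a config file instead.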

                                                
                                    
TestCertExpiration (239.61s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-042983 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1002 08:03:24.880905  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-042983 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (47.673616673s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-042983 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-042983 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.480424727s)
helpers_test.go:175: Cleaning up "cert-expiration-042983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-042983
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-042983: (2.451648732s)
--- PASS: TestCertExpiration (239.61s)
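The expiry window exercised above (3m on first start, then 8760h on restart) shows up as the certificate's notAfter date. A small sketch of how one can read that field on any cert, assuming OpenSSL is available; the file names are made up for the demo:

```shell
# Issue a short-lived throwaway cert (1 day) and read back its notAfter date,
# the field that a --cert-expiration change is reflected in.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/expiry-demo.key -out /tmp/expiry-demo.crt -subj "/CN=expiry-demo"
openssl x509 -enddate -noout -in /tmp/expiry-demo.crt

# Exit status 0 here means the cert is still valid for at least another hour.
openssl x509 -checkend 3600 -noout -in /tmp/expiry-demo.crt
```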

                                                
                                    
TestForceSystemdFlag (36.56s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-835305 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-835305 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.227977113s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-835305 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-835305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-835305
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-835305: (2.023028712s)
--- PASS: TestForceSystemdFlag (36.56s)
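The `cat /etc/containerd/config.toml` step above is what verifies the flag took effect: with `--force-systemd`, containerd's runc runtime should be configured with `SystemdCgroup = true`. A minimal sketch of that check against a hand-written sample fragment (the real file lives inside the minikube node, not at this `/tmp` path):

```shell
# Sample of the relevant containerd config fragment (not the real node file).
cat > /tmp/containerd-demo.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# The assertion the test effectively makes on the ssh'd-out file contents.
grep -q 'SystemdCgroup = true' /tmp/containerd-demo.toml && echo "systemd cgroup driver enabled"
```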

                                                
                                    
TestForceSystemdEnv (46.55s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-790889 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1002 08:03:07.964259  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-790889 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.396457623s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-790889 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-790889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-790889
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-790889: (2.650137348s)
--- PASS: TestForceSystemdEnv (46.55s)

                                                
                                    
TestErrorSpam/setup (31.64s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-426884 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-426884 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-426884 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-426884 --driver=docker  --container-runtime=containerd: (31.64206862s)
--- PASS: TestErrorSpam/setup (31.64s)

                                                
                                    
TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

                                                
                                    
TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 status
--- PASS: TestErrorSpam/status (1.12s)

                                                
                                    
TestErrorSpam/pause (1.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
TestErrorSpam/unpause (1.85s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

                                                
                                    
TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 stop: (1.250737276s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426884 --log_dir /tmp/nospam-426884 stop
--- PASS: TestErrorSpam/stop (1.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21643-811293/.minikube/files/etc/test/nested/copy/813155/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-630775 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1002 07:13:24.889815  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:24.896227  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:24.907623  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:24.929054  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:24.970433  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:25.051931  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:25.213502  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:25.535234  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:26.177271  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-630775 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (49.636380204s)
--- PASS: TestFunctional/serial/StartWithProxy (49.64s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.05s)

=== RUN   TestFunctional/serial/SoftStart
I1002 07:13:27.023823  813155 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-630775 --alsologtostderr -v=8
E1002 07:13:27.459325  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:13:30.021626  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-630775 --alsologtostderr -v=8: (7.048801238s)
functional_test.go:678: soft start took 7.051404474s for "functional-630775" cluster.
I1002 07:13:34.072954  813155 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-630775 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cache add registry.k8s.io/pause:3.1
E1002 07:13:35.145053  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 cache add registry.k8s.io/pause:3.1: (1.328600707s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 cache add registry.k8s.io/pause:3.3: (1.120664143s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 cache add registry.k8s.io/pause:latest: (1.027068296s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-630775 /tmp/TestFunctionalserialCacheCmdcacheadd_local182826794/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cache add minikube-local-cache-test:functional-630775
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cache delete minikube-local-cache-test:functional-630775
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-630775
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.219267ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 kubectl -- --context functional-630775 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-630775 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.21s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-630775 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 07:13:45.388453  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:14:05.871043  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-630775 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.213890505s)
functional_test.go:776: restart took 46.213993559s for "functional-630775" cluster.
I1002 07:14:27.841140  813155 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (46.21s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-630775 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 logs: (1.477747801s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.5s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 logs --file /tmp/TestFunctionalserialLogsFileCmd3465442987/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 logs --file /tmp/TestFunctionalserialLogsFileCmd3465442987/001/logs.txt: (1.499320415s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
TestFunctional/serial/InvalidService (4.78s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-630775 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-630775
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-630775: exit status 115 (581.070073ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31099 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-630775 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.78s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 config get cpus: exit status 14 (62.239469ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 config get cpus: exit status 14 (81.609632ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
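The "Non-zero exit" lines above record that `minikube config get` exits with status 14 when the key has been unset. A minimal shell sketch of handling that exit code, using a stub function in place of the real binary (the stub's message and exit code are copied from the log; this is an illustration, not minikube's implementation):

```shell
# Stub standing in for `out/minikube-linux-arm64 config get cpus` when the
# key is unset: prints the same error the log shows and returns 14.
minikube_config_get_stub() {
  echo "Error: specified key could not be found in config" >&2
  return 14
}

if minikube_config_get_stub 2>/dev/null; then
  echo "key found"
else
  status=$?
  echo "key not set (exit $status)"   # prints: key not set (exit 14)
fi
```

This mirrors the test's flow: `config unset cpus` followed by `config get cpus` is expected to fail with exit 14 and that exact stderr message.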

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-630775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-630775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (186.939483ms)

-- stdout --
	* [functional-630775] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1002 07:25:03.735435  870309 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:25:03.735548  870309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:25:03.735560  870309 out.go:374] Setting ErrFile to fd 2...
	I1002 07:25:03.735567  870309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:25:03.735834  870309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:25:03.736186  870309 out.go:368] Setting JSON to false
	I1002 07:25:03.737136  870309 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25653,"bootTime":1759364251,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 07:25:03.737205  870309 start.go:140] virtualization:  
	I1002 07:25:03.740631  870309 out.go:179] * [functional-630775] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 07:25:03.743828  870309 notify.go:220] Checking for updates...
	I1002 07:25:03.743794  870309 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:25:03.747577  870309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:25:03.750573  870309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:25:03.753520  870309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 07:25:03.756399  870309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:25:03.762286  870309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:25:03.765303  870309 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:25:03.765853  870309 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:25:03.801463  870309 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:25:03.801612  870309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:25:03.856113  870309 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:25:03.846990855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:25:03.856232  870309 docker.go:318] overlay module found
	I1002 07:25:03.859012  870309 out.go:179] * Using the docker driver based on existing profile
	I1002 07:25:03.861563  870309 start.go:304] selected driver: docker
	I1002 07:25:03.861584  870309 start.go:924] validating driver "docker" against &{Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:25:03.861697  870309 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:25:03.864960  870309 out.go:203] 
	W1002 07:25:03.867599  870309 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 07:25:03.870333  870309 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-630775 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.45s)
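The exit status 23 above is produced by minikube's requested-memory validation (250MB requested vs. the 1800MB usable minimum quoted in the error text). A hedged shell sketch of that comparison; the variable names and the code-23 mapping are assumptions taken from this log, not from minikube's source:

```shell
requested_mb=250   # from --memory 250MB
minimum_mb=1800    # usable minimum quoted in the error message

exit_code=0
if [ "$requested_mb" -lt "$minimum_mb" ]; then
  # Same failure class the log reports (RSRC_INSUFFICIENT_REQ_MEMORY).
  echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${requested_mb}MiB is less than the usable minimum of ${minimum_mb}MB" >&2
  exit_code=23   # observed `exit status 23` above; assumed mapping
fi
echo "exit_code=$exit_code"   # prints: exit_code=23
```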

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-630775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-630775 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (221.66526ms)

-- stdout --
	* [functional-630775] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 07:25:05.219704  870650 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:25:05.219894  870650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:25:05.219907  870650 out.go:374] Setting ErrFile to fd 2...
	I1002 07:25:05.219912  870650 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:25:05.220899  870650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:25:05.221332  870650 out.go:368] Setting JSON to false
	I1002 07:25:05.222297  870650 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":25655,"bootTime":1759364251,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 07:25:05.222375  870650 start.go:140] virtualization:  
	I1002 07:25:05.225685  870650 out.go:179] * [functional-630775] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1002 07:25:05.228689  870650 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 07:25:05.228815  870650 notify.go:220] Checking for updates...
	I1002 07:25:05.235440  870650 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 07:25:05.238351  870650 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 07:25:05.241217  870650 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 07:25:05.244100  870650 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 07:25:05.246946  870650 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 07:25:05.250398  870650 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:25:05.251012  870650 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 07:25:05.292881  870650 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 07:25:05.293038  870650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:25:05.358864  870650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 07:25:05.349438375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:25:05.358971  870650 docker.go:318] overlay module found
	I1002 07:25:05.364136  870650 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 07:25:05.367053  870650 start.go:304] selected driver: docker
	I1002 07:25:05.367074  870650 start.go:924] validating driver "docker" against &{Name:functional-630775 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-630775 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 07:25:05.367280  870650 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 07:25:05.370873  870650 out.go:203] 
	W1002 07:25:05.373868  870650 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 07:25:05.376640  870650 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.02s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh -n functional-630775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cp functional-630775:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3378646142/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh -n functional-630775 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh -n functional-630775 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/813155/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo cat /etc/test/nested/copy/813155/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (1.87s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/813155.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo cat /etc/ssl/certs/813155.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/813155.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo cat /usr/share/ca-certificates/813155.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8131552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo cat /etc/ssl/certs/8131552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8131552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo cat /usr/share/ca-certificates/8131552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.87s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-630775 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 ssh "sudo systemctl is-active docker": exit status 1 (496.572549ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 ssh "sudo systemctl is-active crio": exit status 1 (474.123698ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)
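The exit status 1 wrapping `ssh: Process exited with status 3` above reflects systemd's convention: `systemctl is-active` prints the unit state and exits 3 when the unit is inactive. A self-contained sketch with a stub in place of systemctl (the stub is an illustration, not the real command):

```shell
# Stub mimicking `systemctl is-active docker` on a host where the docker
# unit is inactive: prints the state and exits 3, per systemd convention.
is_active_stub() {
  echo "inactive"
  return 3
}

out=$(is_active_stub)
code=$?
echo "state=$out exit=$code"   # prints: state=inactive exit=3
```

This is why the test treats a non-zero exit paired with `inactive` on stdout as a pass: only the active runtime (containerd here) should report active.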

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-630775 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-630775 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-630775 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 864917: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-630775 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.64s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 version -o=json --components: (1.155549851s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-630775 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-630775 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a09ca6ff-acf6-4e73-9acf-1b3c085f61f4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a09ca6ff-acf6-4e73-9acf-1b3c085f61f4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.024771743s
I1002 07:14:45.165304  813155 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-630775 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-630775
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-630775
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-630775 image ls --format short --alsologtostderr:
I1002 07:29:06.082096  872055 out.go:360] Setting OutFile to fd 1 ...
I1002 07:29:06.082242  872055 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:06.082268  872055 out.go:374] Setting ErrFile to fd 2...
I1002 07:29:06.082286  872055 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:06.082558  872055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
I1002 07:29:06.083210  872055 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:06.083374  872055 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:06.083916  872055 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
I1002 07:29:06.101630  872055 ssh_runner.go:195] Run: systemctl --version
I1002 07:29:06.101778  872055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
I1002 07:29:06.121628  872055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
I1002 07:29:06.215685  872055 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-630775 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-630775  │ sha256:a4c49d │ 831kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ docker.io/kicbase/echo-server               │ functional-630775  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/library/minikube-local-cache-test │ functional-630775  │ sha256:e60aad │ 992B   │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:35f3cb │ 22.9MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-630775 image ls --format table --alsologtostderr:
I1002 07:29:10.313316  872414 out.go:360] Setting OutFile to fd 1 ...
I1002 07:29:10.313503  872414 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:10.313531  872414 out.go:374] Setting ErrFile to fd 2...
I1002 07:29:10.313549  872414 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:10.313819  872414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
I1002 07:29:10.314504  872414 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:10.314682  872414 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:10.315223  872414 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
I1002 07:29:10.332356  872414 ssh_runner.go:195] Run: systemctl --version
I1002 07:29:10.332409  872414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
I1002 07:29:10.350095  872414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
I1002 07:29:10.443582  872414 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-630775 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a4c49d7d972be5ec8254294c727243973d60fc94d1591fd6ac031ddf70ee586c","repoDigests":[],"repoTags":["localhost/my-image:functional-630775"],"size":"830616"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9d
bcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:e60aad1842c1681edcefc11cf98d628c9bb877e7d5402a76db909837d2a040ff","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-630775"],"size":"992"},{"id":"sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22948447"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952ad
ef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-630775"],"size":"2173567"},{"id":"sha256:a1894772a478e07c67a56e8b
f32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-630775 image ls --format json --alsologtostderr:
I1002 07:29:10.086315  872378 out.go:360] Setting OutFile to fd 1 ...
I1002 07:29:10.086495  872378 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:10.086523  872378 out.go:374] Setting ErrFile to fd 2...
I1002 07:29:10.086530  872378 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:10.086941  872378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
I1002 07:29:10.088127  872378 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:10.088297  872378 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:10.089052  872378 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
I1002 07:29:10.110276  872378 ssh_runner.go:195] Run: systemctl --version
I1002 07:29:10.110335  872378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
I1002 07:29:10.128154  872378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
I1002 07:29:10.227510  872378 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-630775 image ls --format yaml --alsologtostderr:
- id: sha256:e60aad1842c1681edcefc11cf98d628c9bb877e7d5402a76db909837d2a040ff
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-630775
size: "992"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-630775
size: "2173567"
- id: sha256:35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
repoTags:
- docker.io/library/nginx:alpine
size: "22948447"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-630775 image ls --format yaml --alsologtostderr:
I1002 07:29:06.303466  872090 out.go:360] Setting OutFile to fd 1 ...
I1002 07:29:06.303653  872090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:06.303679  872090 out.go:374] Setting ErrFile to fd 2...
I1002 07:29:06.303699  872090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:06.304014  872090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
I1002 07:29:06.304691  872090 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:06.304909  872090 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:06.305407  872090 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
I1002 07:29:06.324330  872090 ssh_runner.go:195] Run: systemctl --version
I1002 07:29:06.324383  872090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
I1002 07:29:06.341306  872090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
I1002 07:29:06.435216  872090 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 ssh pgrep buildkitd: exit status 1 (263.438497ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image build -t localhost/my-image:functional-630775 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 image build -t localhost/my-image:functional-630775 testdata/build --alsologtostderr: (3.063923422s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-630775 image build -t localhost/my-image:functional-630775 testdata/build --alsologtostderr:
I1002 07:29:06.785490  872189 out.go:360] Setting OutFile to fd 1 ...
I1002 07:29:06.786278  872189 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:06.786306  872189 out.go:374] Setting ErrFile to fd 2...
I1002 07:29:06.786312  872189 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 07:29:06.786576  872189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
I1002 07:29:06.787287  872189 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:06.787996  872189 config.go:182] Loaded profile config "functional-630775": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1002 07:29:06.788492  872189 cli_runner.go:164] Run: docker container inspect functional-630775 --format={{.State.Status}}
I1002 07:29:06.805764  872189 ssh_runner.go:195] Run: systemctl --version
I1002 07:29:06.805819  872189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-630775
I1002 07:29:06.824875  872189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33878 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/functional-630775/id_rsa Username:docker}
I1002 07:29:06.926875  872189 build_images.go:161] Building image from path: /tmp/build.505215683.tar
I1002 07:29:06.926993  872189 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 07:29:06.934963  872189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.505215683.tar
I1002 07:29:06.939020  872189 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.505215683.tar: stat -c "%s %y" /var/lib/minikube/build/build.505215683.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.505215683.tar': No such file or directory
I1002 07:29:06.939052  872189 ssh_runner.go:362] scp /tmp/build.505215683.tar --> /var/lib/minikube/build/build.505215683.tar (3072 bytes)
I1002 07:29:06.961782  872189 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.505215683
I1002 07:29:06.970883  872189 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.505215683 -xf /var/lib/minikube/build/build.505215683.tar
I1002 07:29:06.978895  872189 containerd.go:394] Building image: /var/lib/minikube/build/build.505215683
I1002 07:29:06.978989  872189 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.505215683 --local dockerfile=/var/lib/minikube/build/build.505215683 --output type=image,name=localhost/my-image:functional-630775
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:08e22a3e55ea05404d32c3dac9371d3b18354039e5f67886589990e946b90e0e
#8 exporting manifest sha256:08e22a3e55ea05404d32c3dac9371d3b18354039e5f67886589990e946b90e0e 0.0s done
#8 exporting config sha256:a4c49d7d972be5ec8254294c727243973d60fc94d1591fd6ac031ddf70ee586c 0.0s done
#8 naming to localhost/my-image:functional-630775 done
#8 DONE 0.2s
I1002 07:29:09.771095  872189 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.505215683 --local dockerfile=/var/lib/minikube/build/build.505215683 --output type=image,name=localhost/my-image:functional-630775: (2.792074406s)
I1002 07:29:09.771164  872189 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.505215683
I1002 07:29:09.780224  872189 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.505215683.tar
I1002 07:29:09.790368  872189 build_images.go:217] Built localhost/my-image:functional-630775 from /tmp/build.505215683.tar
I1002 07:29:09.790397  872189 build_images.go:133] succeeded building to: functional-630775
I1002 07:29:09.790402  872189 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.55s)

TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-630775
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image load --daemon kicbase/echo-server:functional-630775 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 image load --daemon kicbase/echo-server:functional-630775 --alsologtostderr: (1.20171248s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image load --daemon kicbase/echo-server:functional-630775 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-630775
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image load --daemon kicbase/echo-server:functional-630775 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.33s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image save kicbase/echo-server:functional-630775 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image rm kicbase/echo-server:functional-630775 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-630775
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 image save --daemon kicbase/echo-server:functional-630775 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-630775
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 update-context --alsologtostderr -v=2
E1002 07:29:47.960604  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-630775 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.15s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.131.33 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-630775 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MountCmd/any-port (8.43s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdany-port1050225749/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759389286511544596" to /tmp/TestFunctionalparallelMountCmdany-port1050225749/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759389286511544596" to /tmp/TestFunctionalparallelMountCmdany-port1050225749/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759389286511544596" to /tmp/TestFunctionalparallelMountCmdany-port1050225749/001/test-1759389286511544596
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T /mount-9p | grep 9p"
E1002 07:14:46.832589  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (537.653353ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 07:14:47.049494  813155 retry.go:31] will retry after 585.508564ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 07:14 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 07:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 07:14 test-1759389286511544596
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh cat /mount-9p/test-1759389286511544596
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-630775 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [cd3d9bed-0d8b-4f4b-8bbd-0c26a4399c58] Pending
helpers_test.go:352: "busybox-mount" [cd3d9bed-0d8b-4f4b-8bbd-0c26a4399c58] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [cd3d9bed-0d8b-4f4b-8bbd-0c26a4399c58] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [cd3d9bed-0d8b-4f4b-8bbd-0c26a4399c58] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003443917s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-630775 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdany-port1050225749/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.43s)

TestFunctional/parallel/MountCmd/specific-port (2.13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdspecific-port2182164821/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (409.95774ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 07:14:55.344479  813155 retry.go:31] will retry after 704.990003ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdspecific-port2182164821/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-630775 ssh "sudo umount -f /mount-9p": exit status 1 (274.510982ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-630775 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdspecific-port2182164821/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.13s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-630775 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-630775 /tmp/TestFunctionalparallelMountCmdVerifyCleanup394669407/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "372.894804ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "59.598641ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "395.00616ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.458966ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/ServiceCmd/List (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 service list: (1.301883608s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-630775 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-630775 service list -o json: (1.298603732s)
functional_test.go:1504: Took "1.298692764s" to run "out/minikube-linux-arm64 -p functional-630775 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.30s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-630775
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-630775
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-630775
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (170.18s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m48.966946751s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5: (1.212120711s)
--- PASS: TestMultiControlPlane/serial/StartCluster (170.18s)

TestMultiControlPlane/serial/DeployApp (8.27s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 kubectl -- rollout status deployment/busybox: (5.379604131s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-dfbkp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tnm84 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tqcxt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-dfbkp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tnm84 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tqcxt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-dfbkp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tnm84 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tqcxt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.27s)

TestMultiControlPlane/serial/PingHostFromPods (1.57s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-dfbkp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-dfbkp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tnm84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tnm84 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tqcxt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 kubectl -- exec busybox-7b57f96db7-tqcxt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.57s)

TestMultiControlPlane/serial/AddWorkerNode (60.74s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 node add --alsologtostderr -v 5
E1002 07:33:24.880906  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 node add --alsologtostderr -v 5: (59.702790389s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5: (1.036683503s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.74s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-934412 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.070481218s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

TestMultiControlPlane/serial/CopyFile (19.56s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 status --output json --alsologtostderr -v 5: (1.066439783s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp testdata/cp-test.txt ha-934412:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3946702587/001/cp-test_ha-934412.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412:/home/docker/cp-test.txt ha-934412-m02:/home/docker/cp-test_ha-934412_ha-934412-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m02 "sudo cat /home/docker/cp-test_ha-934412_ha-934412-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412:/home/docker/cp-test.txt ha-934412-m03:/home/docker/cp-test_ha-934412_ha-934412-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m03 "sudo cat /home/docker/cp-test_ha-934412_ha-934412-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412:/home/docker/cp-test.txt ha-934412-m04:/home/docker/cp-test_ha-934412_ha-934412-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m04 "sudo cat /home/docker/cp-test_ha-934412_ha-934412-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp testdata/cp-test.txt ha-934412-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3946702587/001/cp-test_ha-934412-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m02:/home/docker/cp-test.txt ha-934412:/home/docker/cp-test_ha-934412-m02_ha-934412.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412 "sudo cat /home/docker/cp-test_ha-934412-m02_ha-934412.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m02:/home/docker/cp-test.txt ha-934412-m03:/home/docker/cp-test_ha-934412-m02_ha-934412-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m03 "sudo cat /home/docker/cp-test_ha-934412-m02_ha-934412-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m02:/home/docker/cp-test.txt ha-934412-m04:/home/docker/cp-test_ha-934412-m02_ha-934412-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m04 "sudo cat /home/docker/cp-test_ha-934412-m02_ha-934412-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp testdata/cp-test.txt ha-934412-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3946702587/001/cp-test_ha-934412-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m03:/home/docker/cp-test.txt ha-934412:/home/docker/cp-test_ha-934412-m03_ha-934412.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412 "sudo cat /home/docker/cp-test_ha-934412-m03_ha-934412.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m03:/home/docker/cp-test.txt ha-934412-m02:/home/docker/cp-test_ha-934412-m03_ha-934412-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m02 "sudo cat /home/docker/cp-test_ha-934412-m03_ha-934412-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m03:/home/docker/cp-test.txt ha-934412-m04:/home/docker/cp-test_ha-934412-m03_ha-934412-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m04 "sudo cat /home/docker/cp-test_ha-934412-m03_ha-934412-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp testdata/cp-test.txt ha-934412-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3946702587/001/cp-test_ha-934412-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m04:/home/docker/cp-test.txt ha-934412:/home/docker/cp-test_ha-934412-m04_ha-934412.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412 "sudo cat /home/docker/cp-test_ha-934412-m04_ha-934412.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m04:/home/docker/cp-test.txt ha-934412-m02:/home/docker/cp-test_ha-934412-m04_ha-934412-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m02 "sudo cat /home/docker/cp-test_ha-934412-m04_ha-934412-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 cp ha-934412-m04:/home/docker/cp-test.txt ha-934412-m03:/home/docker/cp-test_ha-934412-m04_ha-934412-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 ssh -n ha-934412-m03 "sudo cat /home/docker/cp-test_ha-934412-m04_ha-934412-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.56s)

TestMultiControlPlane/serial/StopSecondaryNode (12.78s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 node stop m02 --alsologtostderr -v 5
E1002 07:34:36.680372  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:36.686756  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:36.698215  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:36.719673  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:36.761136  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:36.842510  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:37.005789  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:37.327535  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:37.969543  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:39.250854  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:41.813724  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 node stop m02 --alsologtostderr -v 5: (11.999738841s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5: exit status 7 (775.185658ms)

-- stdout --
	ha-934412
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-934412-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-934412-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-934412-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1002 07:34:43.989876  889413 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:34:43.990094  889413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:34:43.990127  889413 out.go:374] Setting ErrFile to fd 2...
	I1002 07:34:43.990149  889413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:34:43.990443  889413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:34:43.990664  889413 out.go:368] Setting JSON to false
	I1002 07:34:43.990726  889413 mustload.go:65] Loading cluster: ha-934412
	I1002 07:34:43.990821  889413 notify.go:220] Checking for updates...
	I1002 07:34:43.991236  889413 config.go:182] Loaded profile config "ha-934412": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:34:43.991274  889413 status.go:174] checking status of ha-934412 ...
	I1002 07:34:43.992140  889413 cli_runner.go:164] Run: docker container inspect ha-934412 --format={{.State.Status}}
	I1002 07:34:44.018946  889413 status.go:371] ha-934412 host status = "Running" (err=<nil>)
	I1002 07:34:44.018970  889413 host.go:66] Checking if "ha-934412" exists ...
	I1002 07:34:44.019286  889413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-934412
	I1002 07:34:44.061374  889413 host.go:66] Checking if "ha-934412" exists ...
	I1002 07:34:44.061691  889413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:34:44.061731  889413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-934412
	I1002 07:34:44.083304  889413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33883 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/ha-934412/id_rsa Username:docker}
	I1002 07:34:44.182628  889413 ssh_runner.go:195] Run: systemctl --version
	I1002 07:34:44.189640  889413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:34:44.203266  889413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:34:44.267845  889413 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-02 07:34:44.252480143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:34:44.268590  889413 kubeconfig.go:125] found "ha-934412" server: "https://192.168.49.254:8443"
	I1002 07:34:44.268651  889413 api_server.go:166] Checking apiserver status ...
	I1002 07:34:44.268717  889413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:34:44.284934  889413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	I1002 07:34:44.293514  889413 api_server.go:182] apiserver freezer: "12:freezer:/docker/529ceb2ed686a4ad540a3b95d301de20b44c9e582b178d011c6267b15dcefea5/kubepods/burstable/podad875f0fd5d7273c26a03ad1c334c193/82d0de5dec031aa919b5ff2eb7e80baf1f5ce9b081011c780482015da248dde2"
	I1002 07:34:44.293590  889413 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/529ceb2ed686a4ad540a3b95d301de20b44c9e582b178d011c6267b15dcefea5/kubepods/burstable/podad875f0fd5d7273c26a03ad1c334c193/82d0de5dec031aa919b5ff2eb7e80baf1f5ce9b081011c780482015da248dde2/freezer.state
	I1002 07:34:44.301117  889413 api_server.go:204] freezer state: "THAWED"
	I1002 07:34:44.301146  889413 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 07:34:44.310560  889413 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 07:34:44.310592  889413 status.go:463] ha-934412 apiserver status = Running (err=<nil>)
	I1002 07:34:44.310604  889413 status.go:176] ha-934412 status: &{Name:ha-934412 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:34:44.310621  889413 status.go:174] checking status of ha-934412-m02 ...
	I1002 07:34:44.310935  889413 cli_runner.go:164] Run: docker container inspect ha-934412-m02 --format={{.State.Status}}
	I1002 07:34:44.329612  889413 status.go:371] ha-934412-m02 host status = "Stopped" (err=<nil>)
	I1002 07:34:44.329636  889413 status.go:384] host is not running, skipping remaining checks
	I1002 07:34:44.329649  889413 status.go:176] ha-934412-m02 status: &{Name:ha-934412-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:34:44.329670  889413 status.go:174] checking status of ha-934412-m03 ...
	I1002 07:34:44.329979  889413 cli_runner.go:164] Run: docker container inspect ha-934412-m03 --format={{.State.Status}}
	I1002 07:34:44.347297  889413 status.go:371] ha-934412-m03 host status = "Running" (err=<nil>)
	I1002 07:34:44.347319  889413 host.go:66] Checking if "ha-934412-m03" exists ...
	I1002 07:34:44.347942  889413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-934412-m03
	I1002 07:34:44.365774  889413 host.go:66] Checking if "ha-934412-m03" exists ...
	I1002 07:34:44.366111  889413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:34:44.366170  889413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-934412-m03
	I1002 07:34:44.383776  889413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33894 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/ha-934412-m03/id_rsa Username:docker}
	I1002 07:34:44.478711  889413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:34:44.492037  889413 kubeconfig.go:125] found "ha-934412" server: "https://192.168.49.254:8443"
	I1002 07:34:44.492066  889413 api_server.go:166] Checking apiserver status ...
	I1002 07:34:44.492112  889413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:34:44.504736  889413 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup
	I1002 07:34:44.514288  889413 api_server.go:182] apiserver freezer: "12:freezer:/docker/b8c8f1c03d8afb859bd59fece6ada96515c8497a9a334288f92e291303adce8e/kubepods/burstable/pod3ed23448036c66f60be65aead480b1e8/17e5504427139e9c3e0d9049db2b569c2971b840891dcfee9691bc7604d9dce4"
	I1002 07:34:44.514379  889413 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8c8f1c03d8afb859bd59fece6ada96515c8497a9a334288f92e291303adce8e/kubepods/burstable/pod3ed23448036c66f60be65aead480b1e8/17e5504427139e9c3e0d9049db2b569c2971b840891dcfee9691bc7604d9dce4/freezer.state
	I1002 07:34:44.526908  889413 api_server.go:204] freezer state: "THAWED"
	I1002 07:34:44.526979  889413 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1002 07:34:44.535765  889413 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1002 07:34:44.535795  889413 status.go:463] ha-934412-m03 apiserver status = Running (err=<nil>)
	I1002 07:34:44.535806  889413 status.go:176] ha-934412-m03 status: &{Name:ha-934412-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:34:44.535855  889413 status.go:174] checking status of ha-934412-m04 ...
	I1002 07:34:44.536187  889413 cli_runner.go:164] Run: docker container inspect ha-934412-m04 --format={{.State.Status}}
	I1002 07:34:44.553926  889413 status.go:371] ha-934412-m04 host status = "Running" (err=<nil>)
	I1002 07:34:44.553952  889413 host.go:66] Checking if "ha-934412-m04" exists ...
	I1002 07:34:44.554255  889413 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-934412-m04
	I1002 07:34:44.575314  889413 host.go:66] Checking if "ha-934412-m04" exists ...
	I1002 07:34:44.575668  889413 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:34:44.575717  889413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-934412-m04
	I1002 07:34:44.593713  889413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33899 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/ha-934412-m04/id_rsa Username:docker}
	I1002 07:34:44.686020  889413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:34:44.698979  889413 status.go:176] ha-934412-m04 status: &{Name:ha-934412-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.78s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.02s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.017455985s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.02s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 node start m02 --alsologtostderr -v 5
E1002 07:34:46.935681  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 07:34:57.177192  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 node start m02 --alsologtostderr -v 5: (11.818802249s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5: (1.342574672s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.28s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (2.013551995s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.01s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.09s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 stop --alsologtostderr -v 5
E1002 07:35:17.658659  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 stop --alsologtostderr -v 5: (37.016364057s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 start --wait true --alsologtostderr -v 5
E1002 07:35:58.620810  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 start --wait true --alsologtostderr -v 5: (1m0.874296364s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.09s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.38s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 node delete m03 --alsologtostderr -v 5: (9.41463289s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.38s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.86s)

TestMultiControlPlane/serial/StopCluster (35.68s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 stop --alsologtostderr -v 5
E1002 07:37:20.542430  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 stop --alsologtostderr -v 5: (35.570522824s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5: exit status 7 (109.646824ms)

-- stdout --
	ha-934412
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-934412-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-934412-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 07:37:25.955314  904463 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:37:25.955432  904463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:37:25.955443  904463 out.go:374] Setting ErrFile to fd 2...
	I1002 07:37:25.955448  904463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:37:25.955716  904463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:37:25.955928  904463 out.go:368] Setting JSON to false
	I1002 07:37:25.955965  904463 mustload.go:65] Loading cluster: ha-934412
	I1002 07:37:25.956054  904463 notify.go:220] Checking for updates...
	I1002 07:37:25.956427  904463 config.go:182] Loaded profile config "ha-934412": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:37:25.956566  904463 status.go:174] checking status of ha-934412 ...
	I1002 07:37:25.957425  904463 cli_runner.go:164] Run: docker container inspect ha-934412 --format={{.State.Status}}
	I1002 07:37:25.976047  904463 status.go:371] ha-934412 host status = "Stopped" (err=<nil>)
	I1002 07:37:25.976068  904463 status.go:384] host is not running, skipping remaining checks
	I1002 07:37:25.976074  904463 status.go:176] ha-934412 status: &{Name:ha-934412 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:37:25.976103  904463 status.go:174] checking status of ha-934412-m02 ...
	I1002 07:37:25.976390  904463 cli_runner.go:164] Run: docker container inspect ha-934412-m02 --format={{.State.Status}}
	I1002 07:37:25.997458  904463 status.go:371] ha-934412-m02 host status = "Stopped" (err=<nil>)
	I1002 07:37:25.997479  904463 status.go:384] host is not running, skipping remaining checks
	I1002 07:37:25.997503  904463 status.go:176] ha-934412-m02 status: &{Name:ha-934412-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:37:25.997521  904463 status.go:174] checking status of ha-934412-m04 ...
	I1002 07:37:25.997807  904463 cli_runner.go:164] Run: docker container inspect ha-934412-m04 --format={{.State.Status}}
	I1002 07:37:26.018044  904463 status.go:371] ha-934412-m04 host status = "Stopped" (err=<nil>)
	I1002 07:37:26.018066  904463 status.go:384] host is not running, skipping remaining checks
	I1002 07:37:26.018074  904463 status.go:176] ha-934412-m04 status: &{Name:ha-934412-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.68s)

TestMultiControlPlane/serial/RestartCluster (67.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1002 07:38:24.880944  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m6.60670853s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.64s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (59.82s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 node add --control-plane --alsologtostderr -v 5: (58.726370239s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-934412 status --alsologtostderr -v 5: (1.091514402s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (59.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.056637133s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.06s)

TestJSONOutput/start/Command (80.5s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-877861 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1002 07:40:04.388938  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-877861 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m20.491076134s)
--- PASS: TestJSONOutput/start/Command (80.50s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-877861 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-877861 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-877861 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-877861 --output=json --user=testUser: (5.800141671s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-547928 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-547928 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (94.814185ms)

-- stdout --
	{"specversion":"1.0","id":"0398cb2e-6dbc-4591-a469-a59e1b5db14e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-547928] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc92f69b-3a8d-411d-85d4-7be9f3ffabed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"e36454ce-f35d-43db-b637-b0ed9d0cd0ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"660bb6c6-17db-4df8-926c-7a2d399f5063","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig"}}
	{"specversion":"1.0","id":"9a91ef2e-5225-4dcb-b8b3-7d139ac9a58c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube"}}
	{"specversion":"1.0","id":"ff11ce17-fdae-4c4f-b779-05cd2d1e741c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b0f5ef41-7148-4877-88c5-d94c15360faa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b83cb81c-134e-4552-ac57-997ba2a8b680","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-547928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-547928
--- PASS: TestErrorJSONOutput (0.23s)

TestKicCustomNetwork/create_custom_network (38.64s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-052366 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-052366 --network=: (36.483812366s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-052366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-052366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-052366: (2.121803956s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.64s)

TestKicCustomNetwork/use_default_bridge_network (41.7s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-237793 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-237793 --network=bridge: (39.680264505s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-237793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-237793
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-237793: (1.993809582s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (41.70s)

TestKicExistingNetwork (33.23s)

=== RUN   TestKicExistingNetwork
I1002 07:42:36.270341  813155 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 07:42:36.286316  813155 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 07:42:36.286400  813155 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 07:42:36.286420  813155 cli_runner.go:164] Run: docker network inspect existing-network
W1002 07:42:36.302050  813155 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 07:42:36.302082  813155 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1002 07:42:36.302100  813155 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1002 07:42:36.302213  813155 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 07:42:36.318158  813155 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-40f09c1369bc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:56:1f:fd:be:2a:e8} reservation:<nil>}
I1002 07:42:36.318462  813155 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016710f0}
I1002 07:42:36.318487  813155 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1002 07:42:36.318542  813155 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 07:42:36.376594  813155 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-620200 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-620200 --network=existing-network: (31.011519569s)
helpers_test.go:175: Cleaning up "existing-network-620200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-620200
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-620200: (2.079909327s)
I1002 07:43:09.487663  813155 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.23s)

TestKicCustomSubnet (33.36s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-090083 --subnet=192.168.60.0/24
E1002 07:43:24.880964  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-090083 --subnet=192.168.60.0/24: (31.173151565s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-090083 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-090083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-090083
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-090083: (2.154690021s)
--- PASS: TestKicCustomSubnet (33.36s)

TestKicStaticIP (35.69s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-920223 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-920223 --static-ip=192.168.200.200: (33.473719441s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-920223 ip
helpers_test.go:175: Cleaning up "static-ip-920223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-920223
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-920223: (2.062194172s)
--- PASS: TestKicStaticIP (35.69s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (74.43s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-242785 --driver=docker  --container-runtime=containerd
E1002 07:44:36.680325  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-242785 --driver=docker  --container-runtime=containerd: (35.838062364s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-245181 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-245181 --driver=docker  --container-runtime=containerd: (33.292713397s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-242785
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-245181
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-245181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-245181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-245181: (1.966273634s)
helpers_test.go:175: Cleaning up "first-242785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-242785
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-242785: (1.932393754s)
--- PASS: TestMinikubeProfile (74.43s)

TestMountStart/serial/StartWithMountFirst (8.82s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-307857 --memory=3072 --mount-string /tmp/TestMountStartserial812027721/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-307857 --memory=3072 --mount-string /tmp/TestMountStartserial812027721/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.820370426s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.82s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-307857 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (6.6s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-309777 --memory=3072 --mount-string /tmp/TestMountStartserial812027721/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-309777 --memory=3072 --mount-string /tmp/TestMountStartserial812027721/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.601387321s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.60s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-309777 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-307857 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-307857 --alsologtostderr -v=5: (1.612632972s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-309777 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-309777
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-309777: (1.219841796s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (7.65s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-309777
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-309777: (6.646081443s)
--- PASS: TestMountStart/serial/RestartStopped (7.65s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-309777 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (109.63s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-776062 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1002 07:46:27.962243  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-776062 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m49.116118735s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.63s)

TestMultiNode/serial/DeployApp2Nodes (5.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-776062 -- rollout status deployment/busybox: (3.448097407s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-h5m75 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-n4ffr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-h5m75 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-n4ffr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-h5m75 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-n4ffr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.24s)

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-h5m75 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-h5m75 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-n4ffr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-776062 -- exec busybox-7b57f96db7-n4ffr -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

TestMultiNode/serial/AddNode (28.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-776062 -v=5 --alsologtostderr
E1002 07:48:24.881013  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-776062 -v=5 --alsologtostderr: (27.68058268s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.38s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-776062 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp testdata/cp-test.txt multinode-776062:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3443081652/001/cp-test_multinode-776062.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062:/home/docker/cp-test.txt multinode-776062-m02:/home/docker/cp-test_multinode-776062_multinode-776062-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m02 "sudo cat /home/docker/cp-test_multinode-776062_multinode-776062-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062:/home/docker/cp-test.txt multinode-776062-m03:/home/docker/cp-test_multinode-776062_multinode-776062-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m03 "sudo cat /home/docker/cp-test_multinode-776062_multinode-776062-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp testdata/cp-test.txt multinode-776062-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3443081652/001/cp-test_multinode-776062-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062-m02:/home/docker/cp-test.txt multinode-776062:/home/docker/cp-test_multinode-776062-m02_multinode-776062.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062 "sudo cat /home/docker/cp-test_multinode-776062-m02_multinode-776062.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062-m02:/home/docker/cp-test.txt multinode-776062-m03:/home/docker/cp-test_multinode-776062-m02_multinode-776062-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m03 "sudo cat /home/docker/cp-test_multinode-776062-m02_multinode-776062-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp testdata/cp-test.txt multinode-776062-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3443081652/001/cp-test_multinode-776062-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062-m03:/home/docker/cp-test.txt multinode-776062:/home/docker/cp-test_multinode-776062-m03_multinode-776062.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062 "sudo cat /home/docker/cp-test_multinode-776062-m03_multinode-776062.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 cp multinode-776062-m03:/home/docker/cp-test.txt multinode-776062-m02:/home/docker/cp-test_multinode-776062-m03_multinode-776062-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 ssh -n multinode-776062-m02 "sudo cat /home/docker/cp-test_multinode-776062-m03_multinode-776062-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.07s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-776062 node stop m03: (1.201245523s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-776062 status: exit status 7 (537.633797ms)

-- stdout --
	multinode-776062
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-776062-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-776062-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-776062 status --alsologtostderr: exit status 7 (528.053343ms)

-- stdout --
	multinode-776062
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-776062-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-776062-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 07:48:38.874158  957843 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:48:38.874280  957843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:48:38.874292  957843 out.go:374] Setting ErrFile to fd 2...
	I1002 07:48:38.874309  957843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:48:38.875068  957843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:48:38.875340  957843 out.go:368] Setting JSON to false
	I1002 07:48:38.875400  957843 mustload.go:65] Loading cluster: multinode-776062
	I1002 07:48:38.875425  957843 notify.go:220] Checking for updates...
	I1002 07:48:38.875881  957843 config.go:182] Loaded profile config "multinode-776062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:48:38.875920  957843 status.go:174] checking status of multinode-776062 ...
	I1002 07:48:38.877424  957843 cli_runner.go:164] Run: docker container inspect multinode-776062 --format={{.State.Status}}
	I1002 07:48:38.895854  957843 status.go:371] multinode-776062 host status = "Running" (err=<nil>)
	I1002 07:48:38.895885  957843 host.go:66] Checking if "multinode-776062" exists ...
	I1002 07:48:38.896192  957843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-776062
	I1002 07:48:38.918533  957843 host.go:66] Checking if "multinode-776062" exists ...
	I1002 07:48:38.918858  957843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:48:38.918905  957843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-776062
	I1002 07:48:38.939790  957843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34004 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/multinode-776062/id_rsa Username:docker}
	I1002 07:48:39.038148  957843 ssh_runner.go:195] Run: systemctl --version
	I1002 07:48:39.044734  957843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:48:39.058001  957843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 07:48:39.129341  957843 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-02 07:48:39.120362839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 07:48:39.129897  957843 kubeconfig.go:125] found "multinode-776062" server: "https://192.168.67.2:8443"
	I1002 07:48:39.129931  957843 api_server.go:166] Checking apiserver status ...
	I1002 07:48:39.129977  957843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 07:48:39.142778  957843 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	I1002 07:48:39.151131  957843 api_server.go:182] apiserver freezer: "12:freezer:/docker/8b11458963216c3234f62c73182ac786b9b91b160b04c3988e2f835ce92f2b18/kubepods/burstable/podb1718ab21a76a21364454863cb5b1ead/b519943b46f0188e6a4ff03fbc3f9eaf51d54ff4d5ce6dc9dc0bb11346467cb5"
	I1002 07:48:39.151207  957843 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8b11458963216c3234f62c73182ac786b9b91b160b04c3988e2f835ce92f2b18/kubepods/burstable/podb1718ab21a76a21364454863cb5b1ead/b519943b46f0188e6a4ff03fbc3f9eaf51d54ff4d5ce6dc9dc0bb11346467cb5/freezer.state
	I1002 07:48:39.158926  957843 api_server.go:204] freezer state: "THAWED"
	I1002 07:48:39.158953  957843 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1002 07:48:39.167136  957843 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1002 07:48:39.167162  957843 status.go:463] multinode-776062 apiserver status = Running (err=<nil>)
	I1002 07:48:39.167174  957843 status.go:176] multinode-776062 status: &{Name:multinode-776062 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:48:39.167190  957843 status.go:174] checking status of multinode-776062-m02 ...
	I1002 07:48:39.167494  957843 cli_runner.go:164] Run: docker container inspect multinode-776062-m02 --format={{.State.Status}}
	I1002 07:48:39.183977  957843 status.go:371] multinode-776062-m02 host status = "Running" (err=<nil>)
	I1002 07:48:39.184003  957843 host.go:66] Checking if "multinode-776062-m02" exists ...
	I1002 07:48:39.184300  957843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-776062-m02
	I1002 07:48:39.201493  957843 host.go:66] Checking if "multinode-776062-m02" exists ...
	I1002 07:48:39.201802  957843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 07:48:39.201857  957843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-776062-m02
	I1002 07:48:39.218480  957843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34009 SSHKeyPath:/home/jenkins/minikube-integration/21643-811293/.minikube/machines/multinode-776062-m02/id_rsa Username:docker}
	I1002 07:48:39.314004  957843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 07:48:39.326652  957843 status.go:176] multinode-776062-m02 status: &{Name:multinode-776062-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:48:39.326697  957843 status.go:174] checking status of multinode-776062-m03 ...
	I1002 07:48:39.327033  957843 cli_runner.go:164] Run: docker container inspect multinode-776062-m03 --format={{.State.Status}}
	I1002 07:48:39.343722  957843 status.go:371] multinode-776062-m03 host status = "Stopped" (err=<nil>)
	I1002 07:48:39.343747  957843 status.go:384] host is not running, skipping remaining checks
	I1002 07:48:39.343753  957843 status.go:176] multinode-776062-m03 status: &{Name:multinode-776062-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (8.13s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-776062 node start m03 -v=5 --alsologtostderr: (7.36896829s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.13s)

TestMultiNode/serial/RestartKeepsNodes (73.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-776062
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-776062
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-776062: (24.868155988s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-776062 --wait=true -v=5 --alsologtostderr
E1002 07:49:36.680362  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-776062 --wait=true -v=5 --alsologtostderr: (48.035439672s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-776062
--- PASS: TestMultiNode/serial/RestartKeepsNodes (73.04s)

TestMultiNode/serial/DeleteNode (5.58s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-776062 node delete m03: (4.878255518s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.58s)

TestMultiNode/serial/StopMultiNode (23.87s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-776062 stop: (23.681351113s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-776062 status: exit status 7 (99.256775ms)

-- stdout --
	multinode-776062
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-776062-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-776062 status --alsologtostderr: exit status 7 (93.355018ms)

-- stdout --
	multinode-776062
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-776062-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 07:50:29.926899  966627 out.go:360] Setting OutFile to fd 1 ...
	I1002 07:50:29.927029  966627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:50:29.927040  966627 out.go:374] Setting ErrFile to fd 2...
	I1002 07:50:29.927045  966627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 07:50:29.927297  966627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 07:50:29.927488  966627 out.go:368] Setting JSON to false
	I1002 07:50:29.927531  966627 mustload.go:65] Loading cluster: multinode-776062
	I1002 07:50:29.927602  966627 notify.go:220] Checking for updates...
	I1002 07:50:29.928889  966627 config.go:182] Loaded profile config "multinode-776062": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 07:50:29.928914  966627 status.go:174] checking status of multinode-776062 ...
	I1002 07:50:29.929562  966627 cli_runner.go:164] Run: docker container inspect multinode-776062 --format={{.State.Status}}
	I1002 07:50:29.947388  966627 status.go:371] multinode-776062 host status = "Stopped" (err=<nil>)
	I1002 07:50:29.947413  966627 status.go:384] host is not running, skipping remaining checks
	I1002 07:50:29.947420  966627 status.go:176] multinode-776062 status: &{Name:multinode-776062 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 07:50:29.947450  966627 status.go:174] checking status of multinode-776062-m02 ...
	I1002 07:50:29.947744  966627 cli_runner.go:164] Run: docker container inspect multinode-776062-m02 --format={{.State.Status}}
	I1002 07:50:29.970952  966627 status.go:371] multinode-776062-m02 host status = "Stopped" (err=<nil>)
	I1002 07:50:29.970978  966627 status.go:384] host is not running, skipping remaining checks
	I1002 07:50:29.970992  966627 status.go:176] multinode-776062-m02 status: &{Name:multinode-776062-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.87s)

TestMultiNode/serial/RestartMultiNode (52.38s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-776062 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1002 07:50:59.751545  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-776062 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.71514673s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-776062 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.38s)

TestMultiNode/serial/ValidateNameConflict (37.6s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-776062
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-776062-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-776062-m02 --driver=docker  --container-runtime=containerd: exit status 14 (93.62197ms)

-- stdout --
	* [multinode-776062-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-776062-m02' is duplicated with machine name 'multinode-776062-m02' in profile 'multinode-776062'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-776062-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-776062-m03 --driver=docker  --container-runtime=containerd: (35.11858712s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-776062
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-776062: exit status 80 (359.321971ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-776062 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-776062-m03 already exists in multinode-776062-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-776062-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-776062-m03: (1.971859059s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.60s)

                                                
                                    
TestPreload (124.64s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-782603 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-782603 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (57.373687509s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-782603 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-782603 image pull gcr.io/k8s-minikube/busybox: (2.318039377s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-782603
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-782603: (5.726488309s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-782603 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1002 07:53:24.880891  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-782603 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (56.716790049s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-782603 image list
helpers_test.go:175: Cleaning up "test-preload-782603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-782603
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-782603: (2.275565712s)
--- PASS: TestPreload (124.64s)

                                                
                                    
TestScheduledStopUnix (109.17s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-270293 --memory=3072 --driver=docker  --container-runtime=containerd
E1002 07:54:36.680402  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-270293 --memory=3072 --driver=docker  --container-runtime=containerd: (32.79952375s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-270293 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-270293 -n scheduled-stop-270293
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-270293 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1002 07:54:42.300971  813155 retry.go:31] will retry after 125.838µs: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.302163  813155 retry.go:31] will retry after 110.425µs: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.303463  813155 retry.go:31] will retry after 255.228µs: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.307373  813155 retry.go:31] will retry after 466.586µs: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.308550  813155 retry.go:31] will retry after 487.483µs: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.309741  813155 retry.go:31] will retry after 732.876µs: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.310937  813155 retry.go:31] will retry after 1.46077ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.313542  813155 retry.go:31] will retry after 992.123µs: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.314753  813155 retry.go:31] will retry after 1.865196ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.317004  813155 retry.go:31] will retry after 5.131326ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.323285  813155 retry.go:31] will retry after 7.524542ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.331587  813155 retry.go:31] will retry after 12.02631ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.343796  813155 retry.go:31] will retry after 14.709669ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.358865  813155 retry.go:31] will retry after 28.367863ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.388891  813155 retry.go:31] will retry after 21.187152ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
I1002 07:54:42.411122  813155 retry.go:31] will retry after 46.590823ms: open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/scheduled-stop-270293/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-270293 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-270293 -n scheduled-stop-270293
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-270293
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-270293 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-270293
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-270293: exit status 7 (74.887828ms)

                                                
                                                
-- stdout --
	scheduled-stop-270293
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-270293 -n scheduled-stop-270293
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-270293 -n scheduled-stop-270293: exit status 7 (73.271053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-270293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-270293
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-270293: (4.619699607s)
--- PASS: TestScheduledStopUnix (109.17s)

                                                
                                    
TestInsufficientStorage (12.1s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-223228 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-223228 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.617510834s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"60046343-ee68-4651-a1ee-66af5751b392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-223228] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0749a4b6-9e01-4138-b7b8-b029278c8a4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}
	{"specversion":"1.0","id":"1ed6f731-848b-416d-9cd6-50011b6bf5f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6f86c874-513d-4e90-a0dd-70ac9ba684f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig"}}
	{"specversion":"1.0","id":"79ece109-9002-4938-a8e0-14c78ef531ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube"}}
	{"specversion":"1.0","id":"022858ec-dbb8-42dd-83d3-7c54f45284c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"058832dc-9a83-4bda-8422-dcb744408498","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0e1e728-a0d7-43e6-8a0a-1d41b11e99b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e2cac273-5506-449b-b30e-9f9d64aa2efa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"491bd540-160a-441b-b9cd-7be3b8172d40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc643522-ee29-4b65-bbd6-657ca4e57eb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4fd7edbf-e33c-487b-9462-293753299bec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-223228\" primary control-plane node in \"insufficient-storage-223228\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f2e2a37-7b5a-42ae-a080-c93d236a4556","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdeb871c-7158-4ec1-9c7a-85225da8fb34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"23c2a834-7001-4228-9cb4-1b670c92d520","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-223228 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-223228 --output=json --layout=cluster: exit status 7 (282.725629ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-223228","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-223228","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:56:07.895678  985345 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-223228" does not appear in /home/jenkins/minikube-integration/21643-811293/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-223228 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-223228 --output=json --layout=cluster: exit status 7 (310.778524ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-223228","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-223228","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 07:56:08.205633  985411 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-223228" does not appear in /home/jenkins/minikube-integration/21643-811293/kubeconfig
	E1002 07:56:08.215582  985411 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/insufficient-storage-223228/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-223228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-223228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-223228: (1.888279029s)
--- PASS: TestInsufficientStorage (12.10s)
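With `--output=json`, minikube emits one CloudEvents envelope per line, as seen in the stdout above. A small sketch of decoding such a line (the struct mirrors only the fields visible in this output, not an exhaustive schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the CloudEvents envelope fields that appear in the
// minikube --output=json lines above.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

// parseEvent decodes a single JSON line into a cloudEvent.
func parseEvent(line string) (cloudEvent, error) {
	var ev cloudEvent
	err := json.Unmarshal([]byte(line), &ev)
	return ev, err
}

func main() {
	// One of the io.k8s.sigs.minikube.info lines from the output above.
	line := `{"specversion":"1.0","id":"0749a4b6-9e01-4138-b7b8-b029278c8a4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21643"}}`
	ev, err := parseEvent(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["message"]) // io.k8s.sigs.minikube.info MINIKUBE_LOCATION=21643
}
```

The `io.k8s.sigs.minikube.error` event at the end of the run carries the exit code (`"exitcode":"26"`) and remediation advice in the same `data` map, which is how the test distinguishes the storage failure from ordinary progress steps.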

                                                
                                    
TestRunningBinaryUpgrade (66.55s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.759912356 start -p running-upgrade-219859 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1002 07:59:36.680251  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.759912356 start -p running-upgrade-219859 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (35.183859423s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-219859 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-219859 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.598284797s)
helpers_test.go:175: Cleaning up "running-upgrade-219859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-219859
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-219859: (2.073675515s)
--- PASS: TestRunningBinaryUpgrade (66.55s)

                                                
                                    
TestKubernetesUpgrade (355.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-054944 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-054944 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.431156937s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-054944
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-054944: (1.232652857s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-054944 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-054944 status --format={{.Host}}: exit status 7 (75.82074ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-054944 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1002 07:58:24.880910  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-054944 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m55.592358884s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-054944 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-054944 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-054944 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (134.515926ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-054944] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-054944
	    minikube start -p kubernetes-upgrade-054944 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0549442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-054944 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-054944 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-054944 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (18.972495187s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-054944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-054944
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-054944: (2.087511036s)
--- PASS: TestKubernetesUpgrade (355.65s)

                                                
                                    
TestMissingContainerUpgrade (143.34s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.599626599 start -p missing-upgrade-419290 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.599626599 start -p missing-upgrade-419290 --memory=3072 --driver=docker  --container-runtime=containerd: (1m1.954708069s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-419290
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-419290
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-419290 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-419290 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m17.427293546s)
helpers_test.go:175: Cleaning up "missing-upgrade-419290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-419290
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-419290: (2.303340367s)
--- PASS: TestMissingContainerUpgrade (143.34s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-781413 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-781413 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (103.420933ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-781413] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (47.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-781413 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-781413 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (46.845463537s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-781413 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.44s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-781413 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-781413 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (15.902944807s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-781413 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-781413 status -o json: exit status 2 (429.335658ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-781413","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-781413
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-781413: (1.911519901s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.24s)

TestNoKubernetes/serial/Start (8.37s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-781413 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-781413 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.371884689s)
--- PASS: TestNoKubernetes/serial/Start (8.37s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-781413 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-781413 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.928466ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (0.67s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.67s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-781413
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-781413: (1.210004062s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.86s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-781413 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-781413 --driver=docker  --container-runtime=containerd: (6.860923355s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.86s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.53s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-781413 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-781413 "sudo systemctl is-active --quiet service kubelet": exit status 1 (525.655678ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.53s)

TestStoppedBinaryUpgrade/Setup (0.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.71s)

TestStoppedBinaryUpgrade/Upgrade (56.95s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.196404375 start -p stopped-upgrade-421771 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.196404375 start -p stopped-upgrade-421771 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.442390985s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.196404375 -p stopped-upgrade-421771 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.196404375 -p stopped-upgrade-421771 stop: (1.252966089s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-421771 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-421771 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (22.252838307s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (56.95s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.57s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-421771
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-421771: (1.57058301s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.57s)

TestPause/serial/Start (49.11s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-567293 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-567293 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (49.112052152s)
--- PASS: TestPause/serial/Start (49.11s)

TestPause/serial/SecondStartNoReconfiguration (6.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-567293 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-567293 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.490770504s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.50s)

TestPause/serial/Pause (0.71s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-567293 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-567293 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-567293 --output=json --layout=cluster: exit status 2 (308.461156ms)

-- stdout --
	{"Name":"pause-567293","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-567293","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

TestPause/serial/Unpause (0.76s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-567293 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

TestPause/serial/PauseAgain (0.96s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-567293 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.96s)

TestPause/serial/DeletePaused (2.83s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-567293 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-567293 --alsologtostderr -v=5: (2.829915213s)
--- PASS: TestPause/serial/DeletePaused (2.83s)

TestPause/serial/VerifyDeletedResources (0.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-567293
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-567293: exit status 1 (17.065315ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-567293: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)

TestNetworkPlugins/group/false (3.79s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-695746 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-695746 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (184.421451ms)

-- stdout --
	* [false-695746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1002 08:02:23.212398 1024578 out.go:360] Setting OutFile to fd 1 ...
	I1002 08:02:23.212595 1024578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:02:23.212623 1024578 out.go:374] Setting ErrFile to fd 2...
	I1002 08:02:23.212643 1024578 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 08:02:23.212982 1024578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21643-811293/.minikube/bin
	I1002 08:02:23.213427 1024578 out.go:368] Setting JSON to false
	I1002 08:02:23.214355 1024578 start.go:130] hostinfo: {"hostname":"ip-172-31-21-244","uptime":27893,"bootTime":1759364251,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1002 08:02:23.214444 1024578 start.go:140] virtualization:  
	I1002 08:02:23.218180 1024578 out.go:179] * [false-695746] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1002 08:02:23.221025 1024578 out.go:179]   - MINIKUBE_LOCATION=21643
	I1002 08:02:23.221074 1024578 notify.go:220] Checking for updates...
	I1002 08:02:23.224083 1024578 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 08:02:23.226983 1024578 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21643-811293/kubeconfig
	I1002 08:02:23.229963 1024578 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21643-811293/.minikube
	I1002 08:02:23.232892 1024578 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1002 08:02:23.235829 1024578 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 08:02:23.239208 1024578 config.go:182] Loaded profile config "kubernetes-upgrade-054944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1002 08:02:23.239315 1024578 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 08:02:23.267078 1024578 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1002 08:02:23.267211 1024578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 08:02:23.328986 1024578 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-02 08:02:23.319441691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1002 08:02:23.329092 1024578 docker.go:318] overlay module found
	I1002 08:02:23.332215 1024578 out.go:179] * Using the docker driver based on user configuration
	I1002 08:02:23.334968 1024578 start.go:304] selected driver: docker
	I1002 08:02:23.334988 1024578 start.go:924] validating driver "docker" against <nil>
	I1002 08:02:23.335014 1024578 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 08:02:23.338540 1024578 out.go:203] 
	W1002 08:02:23.341493 1024578 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1002 08:02:23.344257 1024578 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-695746 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-695746
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-695746
>>> host: /etc/nsswitch.conf:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: /etc/hosts:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: /etc/resolv.conf:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-695746
>>> host: crictl pods:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: crictl containers:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> k8s: describe netcat deployment:
error: context "false-695746" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-695746" does not exist
>>> k8s: netcat logs:
error: context "false-695746" does not exist
>>> k8s: describe coredns deployment:
error: context "false-695746" does not exist
>>> k8s: describe coredns pods:
error: context "false-695746" does not exist
>>> k8s: coredns logs:
error: context "false-695746" does not exist
>>> k8s: describe api server pod(s):
error: context "false-695746" does not exist
>>> k8s: api server logs:
error: context "false-695746" does not exist
>>> host: /etc/cni:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: ip a s:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: ip r s:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: iptables-save:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: iptables table nat:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> k8s: describe kube-proxy daemon set:
error: context "false-695746" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-695746" does not exist
>>> k8s: kube-proxy logs:
error: context "false-695746" does not exist
>>> host: kubelet daemon status:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: kubelet daemon config:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> k8s: kubelet logs:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 07:58:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-054944
contexts:
- context:
    cluster: kubernetes-upgrade-054944
    user: kubernetes-upgrade-054944
  name: kubernetes-upgrade-054944
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-054944
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/kubernetes-upgrade-054944/client.crt
    client-key: /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/kubernetes-upgrade-054944/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-695746

>>> host: docker daemon status:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: docker daemon config:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: /etc/docker/daemon.json:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: docker system info:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: cri-docker daemon status:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: cri-docker daemon config:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: cri-dockerd version:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: containerd daemon status:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: containerd daemon config:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: /etc/containerd/config.toml:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: containerd config dump:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: crio daemon status:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: crio daemon config:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: /etc/crio:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

>>> host: crio config:
* Profile "false-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-695746"

----------------------- debugLogs end: false-695746 [took: 3.451909649s] --------------------------------
helpers_test.go:175: Cleaning up "false-695746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-695746
--- PASS: TestNetworkPlugins/group/false (3.79s)

TestStartStop/group/old-k8s-version/serial/FirstStart (60.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-643978 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1002 08:04:36.679758  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-643978 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m0.676001022s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.68s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-643978 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [59782dfd-5a5a-42fd-b8ee-076b164e4187] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [59782dfd-5a5a-42fd-b8ee-076b164e4187] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005522269s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-643978 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-643978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-643978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.060597555s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-643978 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-643978 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-643978 --alsologtostderr -v=3: (11.939144276s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-643978 -n old-k8s-version-643978
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-643978 -n old-k8s-version-643978: exit status 7 (72.604233ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-643978 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (54.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-643978 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-643978 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (53.749853989s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-643978 -n old-k8s-version-643978
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (54.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r9nft" [5728e742-f85d-4c54-93dc-8c052f566c09] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003677462s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r9nft" [5728e742-f85d-4c54-93dc-8c052f566c09] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003610569s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-643978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-643978 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-643978 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-643978 -n old-k8s-version-643978
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-643978 -n old-k8s-version-643978: exit status 2 (332.221566ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-643978 -n old-k8s-version-643978
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-643978 -n old-k8s-version-643978: exit status 2 (459.978513ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-643978 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-643978 -n old-k8s-version-643978
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-643978 -n old-k8s-version-643978
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.25s)

TestStartStop/group/embed-certs/serial/FirstStart (88.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-314274 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-314274 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m28.122476145s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.13s)

TestStartStop/group/no-preload/serial/FirstStart (67.19s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-905670 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1002 08:07:39.753531  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-905670 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m7.193865501s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.19s)

TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-314274 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d6025400-6296-4834-9b7b-4c6d2b37655a] Pending
helpers_test.go:352: "busybox" [d6025400-6296-4834-9b7b-4c6d2b37655a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d6025400-6296-4834-9b7b-4c6d2b37655a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003054225s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-314274 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

TestStartStop/group/no-preload/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-905670 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [58631f94-e933-441b-b53e-0f4243717d06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 08:08:24.881073  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [58631f94-e933-441b-b53e-0f4243717d06] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003553227s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-905670 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-314274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-314274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.003205111s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-314274 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/embed-certs/serial/Stop (12s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-314274 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-314274 --alsologtostderr -v=3: (12.003460056s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.00s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-905670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-905670 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/no-preload/serial/Stop (11.96s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-905670 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-905670 --alsologtostderr -v=3: (11.962000806s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.96s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-314274 -n embed-certs-314274
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-314274 -n embed-certs-314274: exit status 7 (67.81378ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-314274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (64.16s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-314274 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-314274 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m3.793007577s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-314274 -n embed-certs-314274
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (64.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-905670 -n no-preload-905670
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-905670 -n no-preload-905670: exit status 7 (82.457642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-905670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (56.2s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-905670 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1002 08:09:36.680433  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-905670 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.809033811s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-905670 -n no-preload-905670
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.20s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vsnd9" [47f05225-083e-4365-8980-938fdced345c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003575714s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l4c6x" [f1dcc616-cf60-4ea8-96c5-7e35d44c7594] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003510471s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vsnd9" [47f05225-083e-4365-8980-938fdced345c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003113396s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-905670 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.2s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l4c6x" [f1dcc616-cf60-4ea8-96c5-7e35d44c7594] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004111254s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-314274 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-905670 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.42s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-905670 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-905670 -n no-preload-905670
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-905670 -n no-preload-905670: exit status 2 (339.166261ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-905670 -n no-preload-905670
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-905670 -n no-preload-905670: exit status 2 (328.722956ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-905670 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-905670 -n no-preload-905670
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-905670 -n no-preload-905670
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-314274 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.79s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-314274 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-314274 --alsologtostderr -v=1: (1.163823214s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-314274 -n embed-certs-314274
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-314274 -n embed-certs-314274: exit status 2 (495.660888ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-314274 -n embed-certs-314274
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-314274 -n embed-certs-314274: exit status 2 (384.114228ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-314274 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-314274 --alsologtostderr -v=1: (1.230746065s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-314274 -n embed-certs-314274
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-314274 -n embed-certs-314274
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.79s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-367084 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-367084 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m28.898287165s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.33s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-485790 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1002 08:10:15.820072  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:15.826357  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:15.837900  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:15.859236  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:15.900897  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:15.984933  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:16.152001  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:16.473714  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:17.115918  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:18.397533  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:20.959730  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:26.081658  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:10:36.322968  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-485790 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (46.334695766s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-485790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-485790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.026599257s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-485790 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-485790 --alsologtostderr -v=3: (1.235929922s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-485790 -n newest-cni-485790
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-485790 -n newest-cni-485790: exit status 7 (72.676186ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-485790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.75s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-485790 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1002 08:10:56.804445  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-485790 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (16.234643253s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-485790 -n newest-cni-485790
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.75s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-485790 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-485790 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-485790 -n newest-cni-485790
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-485790 -n newest-cni-485790: exit status 2 (344.004112ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-485790 -n newest-cni-485790
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-485790 -n newest-cni-485790: exit status 2 (332.880301ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-485790 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-485790 -n newest-cni-485790
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-485790 -n newest-cni-485790
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.85s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.32s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m26.322986115s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-367084 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [831db929-5b62-4a6e-98a6-415dd70057f3] Pending
helpers_test.go:352: "busybox" [831db929-5b62-4a6e-98a6-415dd70057f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [831db929-5b62-4a6e-98a6-415dd70057f3] Running
E1002 08:11:37.765798  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004231235s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-367084 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-367084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-367084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.194323939s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-367084 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-367084 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-367084 --alsologtostderr -v=3: (12.28059219s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084: exit status 7 (97.401764ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-367084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-367084 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-367084 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (57.845870422s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.21s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-695746 "pgrep -a kubelet"
I1002 08:12:43.534716  813155 config.go:182] Loaded profile config "auto-695746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-695746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pzv4b" [6020609e-c9d7-467d-8a94-5034ce58fdea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pzv4b" [6020609e-c9d7-467d-8a94-5034ce58fdea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004286655s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n9m7x" [dba8ce9a-f0ce-48fa-bd84-d41ec97c3fae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003497138s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-695746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-n9m7x" [dba8ce9a-f0ce-48fa-bd84-d41ec97c3fae] Running
E1002 08:12:59.687733  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003786473s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-367084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-367084 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-367084 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-367084 --alsologtostderr -v=1: (1.032836932s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084: exit status 2 (455.742895ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084: exit status 2 (439.422772ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-367084 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-367084 --alsologtostderr -v=1: (1.006868279s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-367084 -n default-k8s-diff-port-367084
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.27s)
E1002 08:18:04.285460  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/auto-695746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/flannel/Start (70.01s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m10.006236533s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.01s)

TestNetworkPlugins/group/calico/Start (58.16s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1002 08:13:24.766591  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:24.772926  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:24.785385  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:24.806720  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:24.848050  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:24.881109  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/addons-110926/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:24.930162  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:25.092372  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:25.414453  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:26.056137  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:27.338257  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:29.900099  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:35.022169  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:13:45.263600  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:14:05.745776  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (58.156484836s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.00s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9shdp" [96483dbd-405f-45a2-99cb-e3e7e5533906] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003517831s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-tsqnb" [92d8317b-3bb3-447d-b341-8bb72da698c5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004025283s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-695746 "pgrep -a kubelet"
I1002 08:14:22.994010  813155 config.go:182] Loaded profile config "calico-695746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-695746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2p9wm" [36b49b9e-da3c-4807-90d3-e6774a2c086a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2p9wm" [36b49b9e-da3c-4807-90d3-e6774a2c086a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003417416s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-695746 "pgrep -a kubelet"
I1002 08:14:27.203855  813155 config.go:182] Loaded profile config "flannel-695746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-695746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bv4x5" [1b78989d-28a7-499b-9b84-bdde0d0a45a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bv4x5" [1b78989d-28a7-499b-9b84-bdde0d0a45a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00367015s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.33s)

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-695746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

TestNetworkPlugins/group/calico/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.26s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-695746 exec deployment/netcat -- nslookup kubernetes.default
E1002 08:14:36.679901  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/functional-630775/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (63.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.226592491s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.23s)

TestNetworkPlugins/group/kindnet/Start (85.37s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1002 08:15:15.819633  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:15:43.529648  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/old-k8s-version-643978/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m25.373124549s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.37s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-695746 "pgrep -a kubelet"
I1002 08:16:04.634849  813155 config.go:182] Loaded profile config "custom-flannel-695746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-695746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xfklx" [1b5ab713-aafd-47c9-863b-c9570c89e97e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xfklx" [1b5ab713-aafd-47c9-863b-c9570c89e97e] Running
E1002 08:16:08.629771  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/no-preload-905670/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003840793s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-695746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-vlpl2" [5a5128d9-e57c-4494-9b0e-686c85f3d76f] Running
E1002 08:16:31.375863  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:16:31.382814  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:16:31.394165  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:16:31.415625  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:16:31.457106  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:16:31.544510  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:16:31.706620  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 08:16:32.029754  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003858655s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (54.12s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1002 08:16:36.514981  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (54.116630462s)
--- PASS: TestNetworkPlugins/group/bridge/Start (54.12s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-695746 "pgrep -a kubelet"
I1002 08:16:37.274314  813155 config.go:182] Loaded profile config "kindnet-695746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-695746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-78gch" [7e9030ed-0ec6-4303-aee7-3cfa56afb826] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 08:16:41.637057  813155 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/default-k8s-diff-port-367084/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-78gch" [7e9030ed-0ec6-4303-aee7-3cfa56afb826] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004542103s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-695746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (50.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-695746 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (50.161246955s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-695746 "pgrep -a kubelet"
I1002 08:17:29.412671  813155 config.go:182] Loaded profile config "bridge-695746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-695746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kzmv5" [af6d8206-e97d-4535-9acd-fd51a728a676] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kzmv5" [af6d8206-e97d-4535-9acd-fd51a728a676] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003671368s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-695746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-695746 "pgrep -a kubelet"
I1002 08:18:04.917981  813155 config.go:182] Loaded profile config "enable-default-cni-695746": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-695746 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kzm9h" [c38ec630-9c30-458b-b656-cfd0914edd68] Pending
helpers_test.go:352: "netcat-cd4db9dbf-kzm9h" [c38ec630-9c30-458b-b656-cfd0914edd68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003150784s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-695746 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-695746 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

Test skip (30/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.64s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-533728 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-533728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-533728
--- SKIP: TestDownloadOnlyKic (0.64s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-023718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-023718
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)

TestNetworkPlugins/group/kubenet (3.65s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-695746 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-695746

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-695746" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-695746" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-695746" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-695746" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-695746" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-695746" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-695746" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-695746" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-695746" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-695746" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-695746" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: kubelet daemon config:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> k8s: kubelet logs:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 07:58:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-054944
contexts:
- context:
    cluster: kubernetes-upgrade-054944
    user: kubernetes-upgrade-054944
  name: kubernetes-upgrade-054944
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-054944
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/kubernetes-upgrade-054944/client.crt
    client-key: /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/kubernetes-upgrade-054944/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-695746

>>> host: docker daemon status:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: docker daemon config:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: docker system info:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: cri-docker daemon status:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: cri-docker daemon config:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: cri-dockerd version:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: containerd daemon status:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: containerd daemon config:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: containerd config dump:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: crio daemon status:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: crio daemon config:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: /etc/crio:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

>>> host: crio config:
* Profile "kubenet-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-695746"

----------------------- debugLogs end: kubenet-695746 [took: 3.49588696s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-695746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-695746
--- SKIP: TestNetworkPlugins/group/kubenet (3.65s)

TestNetworkPlugins/group/cilium (4.04s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-695746 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-695746

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-695746

>>> host: /etc/nsswitch.conf:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /etc/hosts:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /etc/resolv.conf:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-695746

>>> host: crictl pods:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: crictl containers:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> k8s: describe netcat deployment:
error: context "cilium-695746" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-695746" does not exist

>>> k8s: netcat logs:
error: context "cilium-695746" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-695746" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-695746" does not exist

>>> k8s: coredns logs:
error: context "cilium-695746" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-695746" does not exist

>>> k8s: api server logs:
error: context "cilium-695746" does not exist

>>> host: /etc/cni:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: ip a s:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: ip r s:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: iptables-save:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: iptables table nat:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-695746

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-695746

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-695746" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-695746" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-695746

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-695746

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-695746" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-695746" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-695746" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-695746" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-695746" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: kubelet daemon config:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> k8s: kubelet logs:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21643-811293/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 02 Oct 2025 07:58:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-054944
contexts:
- context:
    cluster: kubernetes-upgrade-054944
    user: kubernetes-upgrade-054944
  name: kubernetes-upgrade-054944
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-054944
  user:
    client-certificate: /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/kubernetes-upgrade-054944/client.crt
    client-key: /home/jenkins/minikube-integration/21643-811293/.minikube/profiles/kubernetes-upgrade-054944/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-695746

>>> host: docker daemon status:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: docker daemon config:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: docker system info:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: cri-docker daemon status:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: cri-docker daemon config:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: cri-dockerd version:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: containerd daemon status:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: containerd daemon config:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: containerd config dump:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: crio daemon status:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: crio daemon config:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: /etc/crio:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

>>> host: crio config:
* Profile "cilium-695746" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-695746"

----------------------- debugLogs end: cilium-695746 [took: 3.89517012s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-695746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-695746
--- SKIP: TestNetworkPlugins/group/cilium (4.04s)