Test Report: Docker_Linux_crio 19910

0805a48cef53763875eefc0e18e5d59dcaccd8a0:2024-11-05:36955

Tests failed (1/19)

Order  Failed test              Duration
40     TestAddons/parallel/CSI  7200.061s
TestAddons/parallel/CSI (7200.061s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1105 17:43:53.155858  378976 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1105 17:43:53.160545  378976 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1105 17:43:53.160573  378976 kapi.go:107] duration metric: took 4.745856ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.754831ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-335216 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/11/05 17:44:01 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-335216 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [529e5d45-8044-4763-b5b1-cdde950349b3] Pending
helpers_test.go:344: "task-pv-pod" [529e5d45-8044-4763-b5b1-cdde950349b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [529e5d45-8044-4763-b5b1-cdde950349b3] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00422728s
addons_test.go:511: (dbg) Run:  kubectl --context addons-335216 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-335216 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-335216 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-335216 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-335216 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-335216 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-335216 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-335216 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2b0585ef-8725-44ca-aba1-bd7737a1af78] Pending
helpers_test.go:344: "task-pv-pod-restore" [2b0585ef-8725-44ca-aba1-bd7737a1af78] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod-restore" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:548: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:548: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-335216 -n addons-335216
addons_test.go:548: TestAddons/parallel/CSI: showing logs for failed pods as of 2024-11-05 17:50:45.951576626 +0000 UTC m=+594.373389767
addons_test.go:548: (dbg) Run:  kubectl --context addons-335216 describe po task-pv-pod-restore -n default
addons_test.go:548: (dbg) kubectl --context addons-335216 describe po task-pv-pod-restore -n default:
Name:             task-pv-pod-restore
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-335216/192.168.49.2
Start Time:       Tue, 05 Nov 2024 17:44:45 +0000
Labels:           app=task-pv-pod-restore
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
IP:  10.244.0.30
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82jtj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc-restore
ReadOnly:   false
kube-api-access-82jtj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m1s                 default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-335216
Warning  Failed     5m23s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    95s (x5 over 5m22s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     95s (x5 over 5m22s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    83s (x4 over 6m1s)   kubelet            Pulling image "docker.io/nginx"
Warning  Failed     10s (x4 over 5m23s)  kubelet            Error: ErrImagePull
Warning  Failed     10s (x3 over 4m15s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
addons_test.go:548: (dbg) Run:  kubectl --context addons-335216 logs task-pv-pod-restore -n default
addons_test.go:548: (dbg) Non-zero exit: kubectl --context addons-335216 logs task-pv-pod-restore -n default: exit status 1 (72.243678ms)

** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:548: kubectl --context addons-335216 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:549: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: context deadline exceeded
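The kubelet events above attribute the failure to Docker Hub's anonymous pull rate limit ("toomanyrequests"), not to the CSI driver under test. A minimal sketch of how such a failure could be classified from the event message (hypothetical helper, not part of the minikube test suite):

```python
def classify_pull_failure(message: str) -> str:
    """Classify a kubelet image-pull failure from its event message.

    Hypothetical triage helper: "toomanyrequests" is the error token Docker Hub
    returns when the anonymous pull rate limit is hit, as seen in the events above.
    """
    if "toomanyrequests" in message:
        return "rate-limited"
    if "manifest unknown" in message or "not found" in message:
        return "missing-image"
    return "other"


# Message taken verbatim (abbreviated) from the kubelet event in this report.
msg = ('Failed to pull image "docker.io/nginx": reading manifest latest in '
       'docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit.')
print(classify_pull_failure(msg))  # rate-limited
```

A classifier like this would let the harness distinguish infrastructure flakes (registry rate limiting) from genuine CSI regressions when triaging reruns.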
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-335216
helpers_test.go:235: (dbg) docker inspect addons-335216:

-- stdout --
	[
	    {
	        "Id": "b1668d24de314ac79677a47277c53ac9aa25d0d78ba9abe4e5de2d7728639e42",
	        "Created": "2024-11-05T17:41:32.100088683Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 381026,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-11-05T17:41:32.210243823Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:60a5e834f9e5a8de0076d14f95aa6ebfc76d479081a97aa94e6820ea1e903812",
	        "ResolvConfPath": "/var/lib/docker/containers/b1668d24de314ac79677a47277c53ac9aa25d0d78ba9abe4e5de2d7728639e42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1668d24de314ac79677a47277c53ac9aa25d0d78ba9abe4e5de2d7728639e42/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1668d24de314ac79677a47277c53ac9aa25d0d78ba9abe4e5de2d7728639e42/hosts",
	        "LogPath": "/var/lib/docker/containers/b1668d24de314ac79677a47277c53ac9aa25d0d78ba9abe4e5de2d7728639e42/b1668d24de314ac79677a47277c53ac9aa25d0d78ba9abe4e5de2d7728639e42-json.log",
	        "Name": "/addons-335216",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-335216:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-335216",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7fc4cea2103479ce672e2763d721cd1ce2c9191026bca5b6b1d14df40b4ee6f2-init/diff:/var/lib/docker/overlay2/f59c1560c7573b4475a2212bcff00d51b432d427f12d86262cc508ec5a671fac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7fc4cea2103479ce672e2763d721cd1ce2c9191026bca5b6b1d14df40b4ee6f2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7fc4cea2103479ce672e2763d721cd1ce2c9191026bca5b6b1d14df40b4ee6f2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7fc4cea2103479ce672e2763d721cd1ce2c9191026bca5b6b1d14df40b4ee6f2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-335216",
	                "Source": "/var/lib/docker/volumes/addons-335216/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-335216",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-335216",
	                "name.minikube.sigs.k8s.io": "addons-335216",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "35de1ce755fff8edd46476c42bf2454baebb96aa53319997f4d75b76c2f8f5f5",
	            "SandboxKey": "/var/run/docker/netns/35de1ce755ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-335216": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "4981db56ec79abd9dfb3b1f5f0f89b363e3a70ffbdef121f04cac31489c32182",
	                    "EndpointID": "31bc27b007f9387ed329fc4cac617200a65470a9bdc9106d386612d0286ad221",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-335216",
	                        "b1668d24de31"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-335216 -n addons-335216
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-335216 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-335216 logs -n 25: (1.13923965s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-989945              | download-only-989945   | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| start   | --download-only -p                   | download-docker-194835 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | download-docker-194835               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-194835            | download-docker-194835 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| start   | --download-only -p                   | binary-mirror-699458   | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | binary-mirror-699458                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41791               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-699458              | binary-mirror-699458   | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| addons  | disable dashboard -p                 | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | addons-335216                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | addons-335216                        |                        |         |         |                     |                     |
	| start   | -p addons-335216 --wait=true         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:43 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:43 UTC | 05 Nov 24 17:43 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:43 UTC | 05 Nov 24 17:43 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:43 UTC | 05 Nov 24 17:43 UTC |
	|         | -p addons-335216                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:43 UTC | 05 Nov 24 17:43 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:43 UTC | 05 Nov 24 17:44 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-335216 ip                     | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-335216 addons                 | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC | 05 Nov 24 17:44 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ssh     | addons-335216 ssh curl -s            | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-335216 ip                     | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-335216 addons                 | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-335216 addons                 | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:46 UTC | 05 Nov 24 17:46 UTC |
	|         | disable cloud-spanner                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-335216 addons disable         | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:47 UTC | 05 Nov 24 17:48 UTC |
	|         | storage-provisioner-rancher          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-335216 addons                 | addons-335216          | jenkins | v1.34.0 | 05 Nov 24 17:49 UTC | 05 Nov 24 17:49 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:41:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:41:08.209717  380284 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:41:08.209847  380284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:08.209856  380284 out.go:358] Setting ErrFile to fd 2...
	I1105 17:41:08.209860  380284 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:08.210046  380284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-372139/.minikube/bin
	I1105 17:41:08.210755  380284 out.go:352] Setting JSON to false
	I1105 17:41:08.211764  380284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5019,"bootTime":1730823449,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 17:41:08.211886  380284 start.go:139] virtualization: kvm guest
	I1105 17:41:08.214118  380284 out.go:177] * [addons-335216] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 17:41:08.215430  380284 out.go:177]   - MINIKUBE_LOCATION=19910
	I1105 17:41:08.215444  380284 notify.go:220] Checking for updates...
	I1105 17:41:08.217721  380284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:41:08.218933  380284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19910-372139/kubeconfig
	I1105 17:41:08.220259  380284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-372139/.minikube
	I1105 17:41:08.221534  380284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1105 17:41:08.222751  380284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1105 17:41:08.224026  380284 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:41:08.245579  380284 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 17:41:08.245676  380284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:41:08.289617  380284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-11-05 17:41:08.280556238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1105 17:41:08.289760  380284 docker.go:318] overlay module found
	I1105 17:41:08.291789  380284 out.go:177] * Using the docker driver based on user configuration
	I1105 17:41:08.293585  380284 start.go:297] selected driver: docker
	I1105 17:41:08.293606  380284 start.go:901] validating driver "docker" against <nil>
	I1105 17:41:08.293621  380284 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1105 17:41:08.294443  380284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:41:08.340428  380284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-11-05 17:41:08.331794825 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1105 17:41:08.340625  380284 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:41:08.340885  380284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:41:08.342747  380284 out.go:177] * Using Docker driver with root privileges
	I1105 17:41:08.344063  380284 cni.go:84] Creating CNI manager for ""
	I1105 17:41:08.344146  380284 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:41:08.344162  380284 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 17:41:08.344243  380284 start.go:340] cluster config:
	{Name:addons-335216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-335216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:41:08.345537  380284 out.go:177] * Starting "addons-335216" primary control-plane node in "addons-335216" cluster
	I1105 17:41:08.346736  380284 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 17:41:08.347858  380284 out.go:177] * Pulling base image v0.0.45-1730282848-19883 ...
	I1105 17:41:08.348979  380284 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:41:08.349039  380284 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-372139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1105 17:41:08.349039  380284 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 17:41:08.349059  380284 cache.go:56] Caching tarball of preloaded images
	I1105 17:41:08.349148  380284 preload.go:172] Found /home/jenkins/minikube-integration/19910-372139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1105 17:41:08.349163  380284 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1105 17:41:08.349487  380284 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/config.json ...
	I1105 17:41:08.349514  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/config.json: {Name:mk9d7557508034a5973fd110c2ebe21c312ce3e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:08.364893  380284 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 to local cache
	I1105 17:41:08.365070  380284 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory
	I1105 17:41:08.365090  380284 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory, skipping pull
	I1105 17:41:08.365096  380284 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 exists in cache, skipping pull
	I1105 17:41:08.365104  380284 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 as a tarball
	I1105 17:41:08.365111  380284 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 from local cache
	I1105 17:41:20.182188  380284 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 from cached tarball
	I1105 17:41:20.182228  380284 cache.go:194] Successfully downloaded all kic artifacts
	I1105 17:41:20.182285  380284 start.go:360] acquireMachinesLock for addons-335216: {Name:mk5495ab54f950d6c2ab26b4f977a7b159f40545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1105 17:41:20.182401  380284 start.go:364] duration metric: took 90.548µs to acquireMachinesLock for "addons-335216"
	I1105 17:41:20.182428  380284 start.go:93] Provisioning new machine with config: &{Name:addons-335216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-335216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:41:20.182535  380284 start.go:125] createHost starting for "" (driver="docker")
	I1105 17:41:20.184378  380284 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1105 17:41:20.184595  380284 start.go:159] libmachine.API.Create for "addons-335216" (driver="docker")
	I1105 17:41:20.184639  380284 client.go:168] LocalClient.Create starting
	I1105 17:41:20.184742  380284 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca.pem
	I1105 17:41:20.311769  380284 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/cert.pem
	I1105 17:41:20.421858  380284 cli_runner.go:164] Run: docker network inspect addons-335216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1105 17:41:20.437660  380284 cli_runner.go:211] docker network inspect addons-335216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1105 17:41:20.437741  380284 network_create.go:284] running [docker network inspect addons-335216] to gather additional debugging logs...
	I1105 17:41:20.437761  380284 cli_runner.go:164] Run: docker network inspect addons-335216
	W1105 17:41:20.452471  380284 cli_runner.go:211] docker network inspect addons-335216 returned with exit code 1
	I1105 17:41:20.452504  380284 network_create.go:287] error running [docker network inspect addons-335216]: docker network inspect addons-335216: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-335216 not found
	I1105 17:41:20.452518  380284 network_create.go:289] output of [docker network inspect addons-335216]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-335216 not found
	
	** /stderr **
	I1105 17:41:20.452622  380284 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 17:41:20.468287  380284 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c53f10}
	I1105 17:41:20.468341  380284 network_create.go:124] attempt to create docker network addons-335216 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1105 17:41:20.468404  380284 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-335216 addons-335216
	I1105 17:41:20.526469  380284 network_create.go:108] docker network addons-335216 192.168.49.0/24 created
	I1105 17:41:20.526510  380284 kic.go:121] calculated static IP "192.168.49.2" for the "addons-335216" container
	I1105 17:41:20.526576  380284 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1105 17:41:20.541436  380284 cli_runner.go:164] Run: docker volume create addons-335216 --label name.minikube.sigs.k8s.io=addons-335216 --label created_by.minikube.sigs.k8s.io=true
	I1105 17:41:20.558205  380284 oci.go:103] Successfully created a docker volume addons-335216
	I1105 17:41:20.558310  380284 cli_runner.go:164] Run: docker run --rm --name addons-335216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-335216 --entrypoint /usr/bin/test -v addons-335216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -d /var/lib
	I1105 17:41:27.557352  380284 cli_runner.go:217] Completed: docker run --rm --name addons-335216-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-335216 --entrypoint /usr/bin/test -v addons-335216:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -d /var/lib: (6.998999908s)
	I1105 17:41:27.557392  380284 oci.go:107] Successfully prepared a docker volume addons-335216
	I1105 17:41:27.557415  380284 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:41:27.557443  380284 kic.go:194] Starting extracting preloaded images to volume ...
	I1105 17:41:27.557514  380284 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19910-372139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-335216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -I lz4 -xf /preloaded.tar -C /extractDir
	I1105 17:41:32.037798  380284 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19910-372139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-335216:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.48023345s)
	I1105 17:41:32.037832  380284 kic.go:203] duration metric: took 4.480386095s to extract preloaded images to volume ...
	W1105 17:41:32.038000  380284 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1105 17:41:32.038126  380284 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1105 17:41:32.085484  380284 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-335216 --name addons-335216 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-335216 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-335216 --network addons-335216 --ip 192.168.49.2 --volume addons-335216:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4
	I1105 17:41:32.380531  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Running}}
	I1105 17:41:32.398333  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:32.417254  380284 cli_runner.go:164] Run: docker exec addons-335216 stat /var/lib/dpkg/alternatives/iptables
	I1105 17:41:32.460094  380284 oci.go:144] the created container "addons-335216" has a running status.
	I1105 17:41:32.460133  380284 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa...
	I1105 17:41:32.779263  380284 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1105 17:41:32.802612  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:32.820240  380284 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1105 17:41:32.820275  380284 kic_runner.go:114] Args: [docker exec --privileged addons-335216 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1105 17:41:32.862147  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:32.882660  380284 machine.go:93] provisionDockerMachine start ...
	I1105 17:41:32.882756  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:32.905703  380284 main.go:141] libmachine: Using SSH client type: native
	I1105 17:41:32.905981  380284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1105 17:41:32.906004  380284 main.go:141] libmachine: About to run SSH command:
	hostname
	I1105 17:41:33.064844  380284 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-335216
	
	I1105 17:41:33.064894  380284 ubuntu.go:169] provisioning hostname "addons-335216"
	I1105 17:41:33.064985  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:33.084170  380284 main.go:141] libmachine: Using SSH client type: native
	I1105 17:41:33.084436  380284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1105 17:41:33.084458  380284 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-335216 && echo "addons-335216" | sudo tee /etc/hostname
	I1105 17:41:33.224857  380284 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-335216
	
	I1105 17:41:33.224941  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:33.244008  380284 main.go:141] libmachine: Using SSH client type: native
	I1105 17:41:33.244266  380284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1105 17:41:33.244293  380284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-335216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-335216/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-335216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1105 17:41:33.369112  380284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1105 17:41:33.369141  380284 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19910-372139/.minikube CaCertPath:/home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19910-372139/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19910-372139/.minikube}
	I1105 17:41:33.369176  380284 ubuntu.go:177] setting up certificates
	I1105 17:41:33.369208  380284 provision.go:84] configureAuth start
	I1105 17:41:33.369294  380284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-335216
	I1105 17:41:33.385586  380284 provision.go:143] copyHostCerts
	I1105 17:41:33.385665  380284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19910-372139/.minikube/ca.pem (1082 bytes)
	I1105 17:41:33.385773  380284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19910-372139/.minikube/cert.pem (1123 bytes)
	I1105 17:41:33.385832  380284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19910-372139/.minikube/key.pem (1675 bytes)
	I1105 17:41:33.385887  380284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19910-372139/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca-key.pem org=jenkins.addons-335216 san=[127.0.0.1 192.168.49.2 addons-335216 localhost minikube]
	I1105 17:41:33.543029  380284 provision.go:177] copyRemoteCerts
	I1105 17:41:33.543090  380284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1105 17:41:33.543126  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:33.559991  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:33.653600  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1105 17:41:33.677124  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1105 17:41:33.698803  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1105 17:41:33.719683  380284 provision.go:87] duration metric: took 350.45441ms to configureAuth
	I1105 17:41:33.719715  380284 ubuntu.go:193] setting minikube options for container-runtime
	I1105 17:41:33.719904  380284 config.go:182] Loaded profile config "addons-335216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:41:33.720044  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:33.737268  380284 main.go:141] libmachine: Using SSH client type: native
	I1105 17:41:33.737484  380284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866160] 0x868e40 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1105 17:41:33.737512  380284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1105 17:41:33.947779  380284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1105 17:41:33.947812  380284 machine.go:96] duration metric: took 1.065129035s to provisionDockerMachine
	I1105 17:41:33.947824  380284 client.go:171] duration metric: took 13.763177236s to LocalClient.Create
	I1105 17:41:33.947844  380284 start.go:167] duration metric: took 13.763247944s to libmachine.API.Create "addons-335216"
	I1105 17:41:33.947869  380284 start.go:293] postStartSetup for "addons-335216" (driver="docker")
	I1105 17:41:33.947882  380284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1105 17:41:33.947943  380284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1105 17:41:33.947991  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:33.965229  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:34.057901  380284 ssh_runner.go:195] Run: cat /etc/os-release
	I1105 17:41:34.061077  380284 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1105 17:41:34.061114  380284 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1105 17:41:34.061126  380284 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1105 17:41:34.061136  380284 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1105 17:41:34.061153  380284 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-372139/.minikube/addons for local assets ...
	I1105 17:41:34.061231  380284 filesync.go:126] Scanning /home/jenkins/minikube-integration/19910-372139/.minikube/files for local assets ...
	I1105 17:41:34.061263  380284 start.go:296] duration metric: took 113.386597ms for postStartSetup
	I1105 17:41:34.061609  380284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-335216
	I1105 17:41:34.077827  380284 profile.go:143] Saving config to /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/config.json ...
	I1105 17:41:34.078075  380284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1105 17:41:34.078125  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:34.094942  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:34.181804  380284 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1105 17:41:34.186052  380284 start.go:128] duration metric: took 14.003499867s to createHost
	I1105 17:41:34.186082  380284 start.go:83] releasing machines lock for "addons-335216", held for 14.003669713s
	I1105 17:41:34.186147  380284 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-335216
	I1105 17:41:34.203030  380284 ssh_runner.go:195] Run: cat /version.json
	I1105 17:41:34.203078  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:34.203114  380284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1105 17:41:34.203182  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:34.220350  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:34.220839  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:34.384858  380284 ssh_runner.go:195] Run: systemctl --version
	I1105 17:41:34.389182  380284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1105 17:41:34.528490  380284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1105 17:41:34.532701  380284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 17:41:34.550228  380284 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1105 17:41:34.550328  380284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1105 17:41:34.575996  380284 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1105 17:41:34.576022  380284 start.go:495] detecting cgroup driver to use...
	I1105 17:41:34.576057  380284 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1105 17:41:34.576109  380284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1105 17:41:34.590690  380284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1105 17:41:34.601520  380284 docker.go:217] disabling cri-docker service (if available) ...
	I1105 17:41:34.601583  380284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1105 17:41:34.614165  380284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1105 17:41:34.628017  380284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1105 17:41:34.706763  380284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1105 17:41:34.783301  380284 docker.go:233] disabling docker service ...
	I1105 17:41:34.783372  380284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1105 17:41:34.802802  380284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1105 17:41:34.813875  380284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1105 17:41:34.894667  380284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1105 17:41:34.977020  380284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1105 17:41:34.987389  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1105 17:41:35.002239  380284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1105 17:41:35.002302  380284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:41:35.011019  380284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1105 17:41:35.011075  380284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:41:35.020237  380284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:41:35.029123  380284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:41:35.038260  380284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1105 17:41:35.046551  380284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:41:35.055576  380284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:41:35.070696  380284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1105 17:41:35.080061  380284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1105 17:41:35.087859  380284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1105 17:41:35.095344  380284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:41:35.164419  380284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1105 17:41:35.272455  380284 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1105 17:41:35.272535  380284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1105 17:41:35.275963  380284 start.go:563] Will wait 60s for crictl version
	I1105 17:41:35.276015  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:41:35.279036  380284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1105 17:41:35.313538  380284 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1105 17:41:35.313639  380284 ssh_runner.go:195] Run: crio --version
	I1105 17:41:35.350193  380284 ssh_runner.go:195] Run: crio --version
	I1105 17:41:35.386817  380284 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1105 17:41:35.388001  380284 cli_runner.go:164] Run: docker network inspect addons-335216 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1105 17:41:35.404520  380284 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1105 17:41:35.407916  380284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 17:41:35.418133  380284 kubeadm.go:883] updating cluster {Name:addons-335216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-335216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1105 17:41:35.418305  380284 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1105 17:41:35.418355  380284 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:41:35.485593  380284 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 17:41:35.485617  380284 crio.go:433] Images already preloaded, skipping extraction
	I1105 17:41:35.485670  380284 ssh_runner.go:195] Run: sudo crictl images --output json
	I1105 17:41:35.517260  380284 crio.go:514] all images are preloaded for cri-o runtime.
	I1105 17:41:35.517284  380284 cache_images.go:84] Images are preloaded, skipping loading
	I1105 17:41:35.517292  380284 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1105 17:41:35.517397  380284 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-335216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-335216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1105 17:41:35.517499  380284 ssh_runner.go:195] Run: crio config
	I1105 17:41:35.559913  380284 cni.go:84] Creating CNI manager for ""
	I1105 17:41:35.559934  380284 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:41:35.559944  380284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1105 17:41:35.559967  380284 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-335216 NodeName:addons-335216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1105 17:41:35.560109  380284 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-335216"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1105 17:41:35.560175  380284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1105 17:41:35.568358  380284 binaries.go:44] Found k8s binaries, skipping transfer
	I1105 17:41:35.568416  380284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1105 17:41:35.576132  380284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1105 17:41:35.592820  380284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1105 17:41:35.609221  380284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1105 17:41:35.625522  380284 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1105 17:41:35.628651  380284 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1105 17:41:35.638629  380284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:41:35.713928  380284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:41:35.727988  380284 certs.go:68] Setting up /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216 for IP: 192.168.49.2
	I1105 17:41:35.728016  380284 certs.go:194] generating shared ca certs ...
	I1105 17:41:35.728038  380284 certs.go:226] acquiring lock for ca certs: {Name:mkea8b3bf1848bd036732cb0ad7912338d0cb4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:35.728170  380284 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19910-372139/.minikube/ca.key
	I1105 17:41:35.852030  380284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-372139/.minikube/ca.crt ...
	I1105 17:41:35.852070  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/ca.crt: {Name:mk23a8f6a990c8a0e080820400aa7c59fcafad1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:35.852293  380284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-372139/.minikube/ca.key ...
	I1105 17:41:35.852309  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/ca.key: {Name:mkff1bd53a91055c1994054016f7e53ea455ba18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:35.852417  380284 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19910-372139/.minikube/proxy-client-ca.key
	I1105 17:41:36.429670  380284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-372139/.minikube/proxy-client-ca.crt ...
	I1105 17:41:36.429709  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/proxy-client-ca.crt: {Name:mk7cfaadea349a67874a8e4daaf4d25083431ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:36.429907  380284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-372139/.minikube/proxy-client-ca.key ...
	I1105 17:41:36.429924  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/proxy-client-ca.key: {Name:mke98a013bdbbed17adda85e277368307944b5bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:36.430021  380284 certs.go:256] generating profile certs ...
	I1105 17:41:36.430108  380284 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/client.key
	I1105 17:41:36.430128  380284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/client.crt with IP's: []
	I1105 17:41:36.587120  380284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/client.crt ...
	I1105 17:41:36.587159  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/client.crt: {Name:mka471ce3ee2a98d5cf534cea7fb99b75881fc7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:36.587368  380284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/client.key ...
	I1105 17:41:36.587385  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/client.key: {Name:mka90d5e2d1000261fe263ed1e5e5ac470d7b06e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:36.587497  380284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.key.b21ebd98
	I1105 17:41:36.587524  380284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.crt.b21ebd98 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1105 17:41:36.753903  380284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.crt.b21ebd98 ...
	I1105 17:41:36.753945  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.crt.b21ebd98: {Name:mk9e21eafc8561d853e982bbac388043ef2a14d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:36.754141  380284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.key.b21ebd98 ...
	I1105 17:41:36.754160  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.key.b21ebd98: {Name:mk6ee076401c46a22fbdcee8625181290d11e736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:36.754269  380284 certs.go:381] copying /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.crt.b21ebd98 -> /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.crt
	I1105 17:41:36.754367  380284 certs.go:385] copying /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.key.b21ebd98 -> /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.key
	I1105 17:41:36.754441  380284 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/proxy-client.key
	I1105 17:41:36.754470  380284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/proxy-client.crt with IP's: []
	I1105 17:41:36.915062  380284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/proxy-client.crt ...
	I1105 17:41:36.915107  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/proxy-client.crt: {Name:mk4215ebc7f14db481a9a1b2b9504f693948de1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:36.915321  380284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/proxy-client.key ...
	I1105 17:41:36.915342  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/proxy-client.key: {Name:mk7c77358bf476aca2eb8b58638ea632b9a452aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:36.915547  380284 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca-key.pem (1679 bytes)
	I1105 17:41:36.915598  380284 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/ca.pem (1082 bytes)
	I1105 17:41:36.915637  380284 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/cert.pem (1123 bytes)
	I1105 17:41:36.915685  380284 certs.go:484] found cert: /home/jenkins/minikube-integration/19910-372139/.minikube/certs/key.pem (1675 bytes)
	I1105 17:41:36.916351  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1105 17:41:36.940491  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1105 17:41:36.963752  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1105 17:41:36.987253  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1105 17:41:37.009967  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1105 17:41:37.033134  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1105 17:41:37.057204  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1105 17:41:37.080641  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/profiles/addons-335216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1105 17:41:37.103212  380284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19910-372139/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1105 17:41:37.126280  380284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1105 17:41:37.142821  380284 ssh_runner.go:195] Run: openssl version
	I1105 17:41:37.148038  380284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1105 17:41:37.158271  380284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:41:37.162172  380284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  5 17:41 /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:41:37.162242  380284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1105 17:41:37.169116  380284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1105 17:41:37.178552  380284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1105 17:41:37.181843  380284 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1105 17:41:37.181896  380284 kubeadm.go:392] StartCluster: {Name:addons-335216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-335216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:41:37.181984  380284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1105 17:41:37.182031  380284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1105 17:41:37.216320  380284 cri.go:89] found id: ""
	I1105 17:41:37.216381  380284 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1105 17:41:37.224900  380284 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1105 17:41:37.233697  380284 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1105 17:41:37.233761  380284 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1105 17:41:37.243491  380284 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1105 17:41:37.243515  380284 kubeadm.go:157] found existing configuration files:
	
	I1105 17:41:37.243583  380284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1105 17:41:37.253185  380284 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1105 17:41:37.253259  380284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1105 17:41:37.261666  380284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1105 17:41:37.269516  380284 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1105 17:41:37.269589  380284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1105 17:41:37.277702  380284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1105 17:41:37.285776  380284 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1105 17:41:37.285832  380284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1105 17:41:37.293617  380284 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1105 17:41:37.302646  380284 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1105 17:41:37.302710  380284 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1105 17:41:37.310665  380284 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1105 17:41:37.364981  380284 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-gcp\n", err: exit status 1
	I1105 17:41:37.414983  380284 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1105 17:41:45.936542  380284 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1105 17:41:45.936599  380284 kubeadm.go:310] [preflight] Running pre-flight checks
	I1105 17:41:45.936703  380284 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1105 17:41:45.936769  380284 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-gcp
	I1105 17:41:45.936801  380284 kubeadm.go:310] OS: Linux
	I1105 17:41:45.936870  380284 kubeadm.go:310] CGROUPS_CPU: enabled
	I1105 17:41:45.936962  380284 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1105 17:41:45.937055  380284 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1105 17:41:45.937117  380284 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1105 17:41:45.937180  380284 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1105 17:41:45.937239  380284 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1105 17:41:45.937298  380284 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1105 17:41:45.937376  380284 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1105 17:41:45.937443  380284 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1105 17:41:45.937553  380284 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1105 17:41:45.937678  380284 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1105 17:41:45.937821  380284 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1105 17:41:45.937926  380284 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1105 17:41:45.940489  380284 out.go:235]   - Generating certificates and keys ...
	I1105 17:41:45.940605  380284 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1105 17:41:45.940673  380284 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1105 17:41:45.940758  380284 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1105 17:41:45.940817  380284 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1105 17:41:45.940867  380284 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1105 17:41:45.940950  380284 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1105 17:41:45.941036  380284 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1105 17:41:45.941139  380284 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-335216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1105 17:41:45.941189  380284 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1105 17:41:45.941292  380284 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-335216 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1105 17:41:45.941358  380284 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1105 17:41:45.941444  380284 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1105 17:41:45.941512  380284 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1105 17:41:45.941600  380284 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1105 17:41:45.941649  380284 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1105 17:41:45.941719  380284 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1105 17:41:45.941786  380284 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1105 17:41:45.941848  380284 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1105 17:41:45.941902  380284 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1105 17:41:45.941972  380284 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1105 17:41:45.942028  380284 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1105 17:41:45.943258  380284 out.go:235]   - Booting up control plane ...
	I1105 17:41:45.943337  380284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1105 17:41:45.943415  380284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1105 17:41:45.943498  380284 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1105 17:41:45.943640  380284 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1105 17:41:45.943724  380284 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1105 17:41:45.943788  380284 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1105 17:41:45.943946  380284 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1105 17:41:45.944072  380284 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1105 17:41:45.944156  380284 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.629316ms
	I1105 17:41:45.944219  380284 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1105 17:41:45.944274  380284 kubeadm.go:310] [api-check] The API server is healthy after 4.50182012s
	I1105 17:41:45.944368  380284 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1105 17:41:45.944521  380284 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1105 17:41:45.944576  380284 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1105 17:41:45.944727  380284 kubeadm.go:310] [mark-control-plane] Marking the node addons-335216 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1105 17:41:45.944779  380284 kubeadm.go:310] [bootstrap-token] Using token: aiczkk.j3d9xiarhtn86cs5
	I1105 17:41:45.946072  380284 out.go:235]   - Configuring RBAC rules ...
	I1105 17:41:45.946172  380284 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1105 17:41:45.946254  380284 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1105 17:41:45.946373  380284 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1105 17:41:45.946490  380284 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1105 17:41:45.946618  380284 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1105 17:41:45.946730  380284 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1105 17:41:45.946869  380284 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1105 17:41:45.946908  380284 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1105 17:41:45.946951  380284 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1105 17:41:45.946956  380284 kubeadm.go:310] 
	I1105 17:41:45.947005  380284 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1105 17:41:45.947015  380284 kubeadm.go:310] 
	I1105 17:41:45.947077  380284 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1105 17:41:45.947083  380284 kubeadm.go:310] 
	I1105 17:41:45.947104  380284 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1105 17:41:45.947157  380284 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1105 17:41:45.947207  380284 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1105 17:41:45.947214  380284 kubeadm.go:310] 
	I1105 17:41:45.947263  380284 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1105 17:41:45.947272  380284 kubeadm.go:310] 
	I1105 17:41:45.947317  380284 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1105 17:41:45.947322  380284 kubeadm.go:310] 
	I1105 17:41:45.947364  380284 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1105 17:41:45.947430  380284 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1105 17:41:45.947492  380284 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1105 17:41:45.947498  380284 kubeadm.go:310] 
	I1105 17:41:45.947569  380284 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1105 17:41:45.947633  380284 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1105 17:41:45.947639  380284 kubeadm.go:310] 
	I1105 17:41:45.947711  380284 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token aiczkk.j3d9xiarhtn86cs5 \
	I1105 17:41:45.947797  380284 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:26979d6044967558e10df51ed449079389e167190b8e033ee03f08ca428c5c98 \
	I1105 17:41:45.947817  380284 kubeadm.go:310] 	--control-plane 
	I1105 17:41:45.947823  380284 kubeadm.go:310] 
	I1105 17:41:45.947891  380284 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1105 17:41:45.947899  380284 kubeadm.go:310] 
	I1105 17:41:45.947981  380284 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token aiczkk.j3d9xiarhtn86cs5 \
	I1105 17:41:45.948091  380284 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:26979d6044967558e10df51ed449079389e167190b8e033ee03f08ca428c5c98 
	I1105 17:41:45.948111  380284 cni.go:84] Creating CNI manager for ""
	I1105 17:41:45.948123  380284 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:41:45.949599  380284 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1105 17:41:45.950970  380284 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1105 17:41:45.955024  380284 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1105 17:41:45.955045  380284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1105 17:41:45.972673  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1105 17:41:46.179580  380284 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1105 17:41:46.179654  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:46.179669  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-335216 minikube.k8s.io/updated_at=2024_11_05T17_41_46_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911 minikube.k8s.io/name=addons-335216 minikube.k8s.io/primary=true
	I1105 17:41:46.187447  380284 ops.go:34] apiserver oom_adj: -16
	I1105 17:41:46.279111  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:46.779863  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:47.279626  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:47.780184  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:48.280157  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:48.779178  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:49.279777  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:49.779839  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:50.279284  380284 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1105 17:41:50.366405  380284 kubeadm.go:1113] duration metric: took 4.18681703s to wait for elevateKubeSystemPrivileges
	I1105 17:41:50.366436  380284 kubeadm.go:394] duration metric: took 13.18454487s to StartCluster
	I1105 17:41:50.366455  380284 settings.go:142] acquiring lock: {Name:mk06e2ad91da0ca8589f8bcaa7f56df99870c0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:50.366582  380284 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19910-372139/kubeconfig
	I1105 17:41:50.366952  380284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19910-372139/kubeconfig: {Name:mkb7c38510e0c6aaf1c7e975d1bf2e2a3964d41e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1105 17:41:50.367130  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1105 17:41:50.367155  380284 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1105 17:41:50.367212  380284 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1105 17:41:50.367327  380284 addons.go:69] Setting yakd=true in profile "addons-335216"
	I1105 17:41:50.367333  380284 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-335216"
	I1105 17:41:50.367358  380284 config.go:182] Loaded profile config "addons-335216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:41:50.367377  380284 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-335216"
	I1105 17:41:50.367395  380284 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-335216"
	I1105 17:41:50.367401  380284 addons.go:69] Setting ingress=true in profile "addons-335216"
	I1105 17:41:50.367395  380284 addons.go:69] Setting metrics-server=true in profile "addons-335216"
	I1105 17:41:50.367412  380284 addons.go:69] Setting storage-provisioner=true in profile "addons-335216"
	I1105 17:41:50.367419  380284 addons.go:69] Setting volcano=true in profile "addons-335216"
	I1105 17:41:50.367422  380284 addons.go:234] Setting addon metrics-server=true in "addons-335216"
	I1105 17:41:50.367427  380284 addons.go:69] Setting default-storageclass=true in profile "addons-335216"
	I1105 17:41:50.367429  380284 addons.go:234] Setting addon storage-provisioner=true in "addons-335216"
	I1105 17:41:50.367432  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.367438  380284 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-335216"
	I1105 17:41:50.367450  380284 addons.go:69] Setting gcp-auth=true in profile "addons-335216"
	I1105 17:41:50.367452  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.367455  380284 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-335216"
	I1105 17:41:50.367463  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.367468  380284 mustload.go:65] Loading cluster: addons-335216
	I1105 17:41:50.367431  380284 addons.go:234] Setting addon volcano=true in "addons-335216"
	I1105 17:41:50.367566  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.367378  380284 addons.go:69] Setting cloud-spanner=true in profile "addons-335216"
	I1105 17:41:50.367440  380284 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-335216"
	I1105 17:41:50.367601  380284 addons.go:234] Setting addon cloud-spanner=true in "addons-335216"
	I1105 17:41:50.367623  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.367662  380284 config.go:182] Loaded profile config "addons-335216": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1105 17:41:50.367858  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.367879  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.367898  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.367958  380284 addons.go:234] Setting addon yakd=true in "addons-335216"
	I1105 17:41:50.367997  380284 addons.go:69] Setting registry=true in profile "addons-335216"
	I1105 17:41:50.368019  380284 addons.go:234] Setting addon registry=true in "addons-335216"
	I1105 17:41:50.368029  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.368056  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.368075  380284 addons.go:69] Setting volumesnapshots=true in profile "addons-335216"
	I1105 17:41:50.368094  380284 addons.go:234] Setting addon volumesnapshots=true in "addons-335216"
	I1105 17:41:50.368112  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.367978  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.368365  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.368539  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.368698  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.367999  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.368952  380284 addons.go:69] Setting ingress-dns=true in profile "addons-335216"
	I1105 17:41:50.369022  380284 addons.go:234] Setting addon ingress-dns=true in "addons-335216"
	I1105 17:41:50.369102  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.369622  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.367417  380284 addons.go:234] Setting addon ingress=true in "addons-335216"
	I1105 17:41:50.370119  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.370598  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.371013  380284 addons.go:69] Setting inspektor-gadget=true in profile "addons-335216"
	I1105 17:41:50.371041  380284 addons.go:234] Setting addon inspektor-gadget=true in "addons-335216"
	I1105 17:41:50.371089  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.373865  380284 out.go:177] * Verifying Kubernetes components...
	I1105 17:41:50.368059  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.367408  380284 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-335216"
	I1105 17:41:50.374524  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.375025  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.367989  380284 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-335216"
	I1105 17:41:50.377537  380284 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-335216"
	I1105 17:41:50.377588  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.378284  380284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1105 17:41:50.403836  380284 addons.go:234] Setting addon default-storageclass=true in "addons-335216"
	I1105 17:41:50.403900  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.404422  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.405398  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.405634  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.406029  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.407527  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.409834  380284 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-335216"
	I1105 17:41:50.409880  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.410223  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:50.410349  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:50.413059  380284 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1105 17:41:50.413182  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1105 17:41:50.414534  380284 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1105 17:41:50.414616  380284 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1105 17:41:50.414714  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.415700  380284 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1105 17:41:50.415721  380284 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1105 17:41:50.415775  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.420760  380284 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1105 17:41:50.422470  380284 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1105 17:41:50.422557  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1105 17:41:50.422707  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.431710  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1105 17:41:50.435739  380284 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	W1105 17:41:50.437234  380284 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1105 17:41:50.443063  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1105 17:41:50.443108  380284 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1105 17:41:50.443128  380284 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1105 17:41:50.443277  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.449102  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1105 17:41:50.450516  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1105 17:41:50.462593  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1105 17:41:50.465271  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.472420  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1105 17:41:50.472429  380284 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1105 17:41:50.474343  380284 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1105 17:41:50.474376  380284 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1105 17:41:50.474446  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.476696  380284 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1105 17:41:50.476718  380284 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1105 17:41:50.476767  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.478366  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1105 17:41:50.479768  380284 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1105 17:41:50.480925  380284 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1105 17:41:50.480945  380284 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1105 17:41:50.481035  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.489506  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.495511  380284 out.go:177]   - Using image docker.io/registry:2.8.3
	I1105 17:41:50.496964  380284 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1105 17:41:50.498424  380284 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1105 17:41:50.498448  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1105 17:41:50.498517  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.502288  380284 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1105 17:41:50.504444  380284 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:41:50.504463  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1105 17:41:50.504522  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.507423  380284 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1105 17:41:50.508649  380284 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:41:50.508670  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1105 17:41:50.508731  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.508979  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.519134  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.521524  380284 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1105 17:41:50.522945  380284 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:41:50.522972  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1105 17:41:50.523035  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.524690  380284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1105 17:41:50.526171  380284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:41:50.527538  380284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:41:50.528794  380284 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:41:50.528815  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1105 17:41:50.528876  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.533046  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.535201  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.536310  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.538637  380284 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1105 17:41:50.538646  380284 out.go:177]   - Using image docker.io/busybox:stable
	I1105 17:41:50.539992  380284 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1105 17:41:50.540130  380284 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:41:50.540147  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1105 17:41:50.540200  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.541230  380284 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:41:50.541261  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1105 17:41:50.541324  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:50.542608  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.547415  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.552010  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.558825  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.574876  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1105 17:41:50.584644  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.590359  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.592603  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:50.675353  380284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1105 17:41:50.862892  380284 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1105 17:41:50.862999  380284 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1105 17:41:50.867716  380284 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1105 17:41:50.867813  380284 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1105 17:41:50.878485  380284 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1105 17:41:50.878597  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1105 17:41:51.054644  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1105 17:41:51.055550  380284 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:41:51.055625  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1105 17:41:51.057306  380284 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1105 17:41:51.057364  380284 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1105 17:41:51.064538  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1105 17:41:51.067715  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1105 17:41:51.070422  380284 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:41:51.070452  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1105 17:41:51.156176  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1105 17:41:51.156352  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1105 17:41:51.157125  380284 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1105 17:41:51.157148  380284 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1105 17:41:51.157470  380284 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1105 17:41:51.157489  380284 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1105 17:41:51.159044  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1105 17:41:51.171279  380284 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1105 17:41:51.171314  380284 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1105 17:41:51.270423  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1105 17:41:51.353908  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1105 17:41:51.358983  380284 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1105 17:41:51.359075  380284 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1105 17:41:51.362336  380284 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1105 17:41:51.362559  380284 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1105 17:41:51.367941  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1105 17:41:51.375641  380284 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1105 17:41:51.375676  380284 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1105 17:41:51.453643  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1105 17:41:51.557453  380284 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1105 17:41:51.557491  380284 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1105 17:41:51.558814  380284 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1105 17:41:51.558895  380284 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1105 17:41:51.561103  380284 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:41:51.561126  380284 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1105 17:41:51.754407  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1105 17:41:51.870937  380284 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1105 17:41:51.870966  380284 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1105 17:41:51.966519  380284 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1105 17:41:51.966640  380284 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1105 17:41:52.071716  380284 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:41:52.071822  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1105 17:41:52.259547  380284 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:41:52.259642  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1105 17:41:52.376199  380284 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.801284379s)
	I1105 17:41:52.376317  380284 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1105 17:41:52.377734  380284 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.702345637s)
	I1105 17:41:52.378950  380284 node_ready.go:35] waiting up to 6m0s for node "addons-335216" to be "Ready" ...
	I1105 17:41:52.455218  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1105 17:41:52.468285  380284 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1105 17:41:52.468322  380284 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1105 17:41:52.956387  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:41:52.959653  380284 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1105 17:41:52.959755  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1105 17:41:53.264254  380284 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-335216" context rescaled to 1 replicas
	I1105 17:41:53.473584  380284 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1105 17:41:53.473677  380284 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1105 17:41:54.060376  380284 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1105 17:41:54.060415  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1105 17:41:54.258968  380284 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1105 17:41:54.259057  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1105 17:41:54.371214  380284 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:41:54.371311  380284 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1105 17:41:54.460813  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.406044671s)
	I1105 17:41:54.460703  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.396121842s)
	I1105 17:41:54.465202  380284 node_ready.go:53] node "addons-335216" has status "Ready":"False"
	I1105 17:41:54.654926  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1105 17:41:55.254771  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.187003937s)
	I1105 17:41:55.254870  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.098667579s)
	I1105 17:41:55.359402  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.200329389s)
	I1105 17:41:55.359331  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.202941215s)
	I1105 17:41:55.558663  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.28813398s)
	I1105 17:41:55.558756  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.204747201s)
	I1105 17:41:56.880880  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.512893097s)
	I1105 17:41:56.880937  380284 addons.go:475] Verifying addon ingress=true in "addons-335216"
	I1105 17:41:56.881043  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.126525997s)
	I1105 17:41:56.881115  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.425859002s)
	I1105 17:41:56.881128  380284 addons.go:475] Verifying addon metrics-server=true in "addons-335216"
	I1105 17:41:56.880941  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.427195922s)
	I1105 17:41:56.881159  380284 addons.go:475] Verifying addon registry=true in "addons-335216"
	I1105 17:41:56.882596  380284 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-335216 service yakd-dashboard -n yakd-dashboard
	
	I1105 17:41:56.882618  380284 out.go:177] * Verifying registry addon...
	I1105 17:41:56.882618  380284 out.go:177] * Verifying ingress addon...
	I1105 17:41:56.885355  380284 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1105 17:41:56.885355  380284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1105 17:41:56.956090  380284 node_ready.go:53] node "addons-335216" has status "Ready":"False"
	I1105 17:41:56.960501  380284 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1105 17:41:56.960534  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:41:56.960740  380284 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1105 17:41:56.960765  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:41:57.389241  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:41:57.389755  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:41:57.667849  380284 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1105 17:41:57.667937  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:57.691604  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:57.884416  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.927975232s)
	W1105 17:41:57.884469  380284 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1105 17:41:57.884504  380284 retry.go:31] will retry after 307.299352ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1105 17:41:57.890789  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:41:57.891282  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:41:58.070684  380284 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1105 17:41:58.090754  380284 addons.go:234] Setting addon gcp-auth=true in "addons-335216"
	I1105 17:41:58.090826  380284 host.go:66] Checking if "addons-335216" exists ...
	I1105 17:41:58.091330  380284 cli_runner.go:164] Run: docker container inspect addons-335216 --format={{.State.Status}}
	I1105 17:41:58.110672  380284 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1105 17:41:58.110717  380284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-335216
	I1105 17:41:58.126859  380284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19910-372139/.minikube/machines/addons-335216/id_rsa Username:docker}
	I1105 17:41:58.192658  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1105 17:41:58.388145  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:41:58.389097  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:41:58.675544  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.020547695s)
	I1105 17:41:58.675592  380284 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-335216"
	I1105 17:41:58.676810  380284 out.go:177] * Verifying csi-hostpath-driver addon...
	I1105 17:41:58.679113  380284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1105 17:41:58.686087  380284 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:41:58.686113  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:41:58.889721  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:41:58.890223  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:41:59.182834  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:41:59.382695  380284 node_ready.go:53] node "addons-335216" has status "Ready":"False"
	I1105 17:41:59.388689  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:41:59.389338  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:41:59.683801  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:41:59.889170  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:41:59.889696  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:00.183228  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:00.388300  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:00.388474  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:00.682384  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:00.888674  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:00.889165  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:01.076917  380284 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.884198963s)
	I1105 17:42:01.076965  380284 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.966270947s)
	I1105 17:42:01.078745  380284 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1105 17:42:01.080362  380284 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1105 17:42:01.081614  380284 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1105 17:42:01.081633  380284 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1105 17:42:01.099867  380284 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1105 17:42:01.099900  380284 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1105 17:42:01.118659  380284 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:42:01.118688  380284 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1105 17:42:01.136220  380284 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1105 17:42:01.182714  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:01.389702  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:01.390151  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:01.470736  380284 addons.go:475] Verifying addon gcp-auth=true in "addons-335216"
	I1105 17:42:01.472197  380284 out.go:177] * Verifying gcp-auth addon...
	I1105 17:42:01.474566  380284 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1105 17:42:01.489820  380284 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1105 17:42:01.489843  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:01.682803  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:01.882434  380284 node_ready.go:53] node "addons-335216" has status "Ready":"False"
	I1105 17:42:01.888772  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:01.889156  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:01.978503  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:02.182799  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:02.389040  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:02.389504  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:02.477615  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:02.682820  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:02.888384  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:02.888937  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:02.978315  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:03.183431  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:03.389293  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:03.389709  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:03.489876  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:03.683259  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:03.888057  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:03.888479  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:03.978182  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:04.183362  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:04.381959  380284 node_ready.go:53] node "addons-335216" has status "Ready":"False"
	I1105 17:42:04.388786  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:04.389119  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:04.478463  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:04.682443  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:04.888448  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:04.888847  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:04.978258  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:05.183186  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:05.388502  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:05.388860  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:05.478187  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:05.683583  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:05.889031  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:05.889387  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:05.977973  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:06.183248  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:06.383084  380284 node_ready.go:53] node "addons-335216" has status "Ready":"False"
	I1105 17:42:06.388704  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:06.388966  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:06.478313  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:06.682617  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:06.888682  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:06.889183  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:06.978400  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:07.182504  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:07.388985  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:07.389367  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:07.477745  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:07.683119  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:07.888636  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:07.889039  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:07.978499  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:08.182568  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:08.389151  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:08.389449  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:08.477810  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:08.682752  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:08.882829  380284 node_ready.go:53] node "addons-335216" has status "Ready":"False"
	I1105 17:42:08.889245  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:08.889550  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:08.978007  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:09.182989  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:09.388452  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:09.389117  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:09.478159  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:09.683253  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:09.881846  380284 node_ready.go:49] node "addons-335216" has status "Ready":"True"
	I1105 17:42:09.881878  380284 node_ready.go:38] duration metric: took 17.502871163s for node "addons-335216" to be "Ready" ...
	I1105 17:42:09.881891  380284 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:42:09.891151  380284 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:09.900514  380284 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1105 17:42:09.900549  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:09.900610  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:09.985674  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:10.184526  380284 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1105 17:42:10.184550  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:10.392897  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:10.394788  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:10.557902  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:10.685981  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:10.889706  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:10.889961  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:10.978561  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:11.183187  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:11.390377  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:11.390650  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:11.479038  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:11.684404  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:11.889928  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:11.890053  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:11.896455  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:11.981350  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:12.184885  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:12.389400  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:12.389692  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:12.478518  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:12.684235  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:12.890416  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:12.890940  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:12.978188  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:13.184167  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:13.388692  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:13.388934  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:13.477855  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:13.684172  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:13.889800  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:13.890045  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:13.978150  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:14.184342  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:14.390271  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:14.390605  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:14.397391  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:14.478507  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:14.683749  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:14.889910  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:14.891274  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:14.978478  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:15.184604  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:15.389986  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:15.390239  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:15.477718  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:15.683508  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:15.889417  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:15.889862  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:15.979013  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:16.184268  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:16.391156  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:16.391492  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:16.459969  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:16.479004  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:16.688455  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:16.889650  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:16.890445  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:16.978300  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:17.184514  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:17.389213  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:17.389499  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:17.478973  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:17.684554  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:17.889834  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:17.890072  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:17.990208  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:18.184950  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:18.389457  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:18.390021  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:18.479476  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:18.684406  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:18.892374  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:18.892976  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:18.895503  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:18.978267  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:19.183284  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:19.388878  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:19.389189  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:19.478310  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:19.684728  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:19.889542  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:19.890361  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:19.978684  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:20.187711  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:20.389991  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:20.390207  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:20.478980  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:20.686204  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:20.889366  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:20.889773  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:20.896468  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:20.979009  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:21.184629  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:21.390218  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:21.390415  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:21.478185  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:21.685012  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:21.889240  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:21.889527  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:21.978521  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:22.183659  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:22.456860  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:22.458319  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:22.478925  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:22.761700  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:22.891548  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:22.892224  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:22.956163  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:22.978188  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:23.185006  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:23.390006  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:23.390499  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:23.479581  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:23.684398  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:23.889253  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:23.889399  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:23.978991  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:24.184284  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:24.390094  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:24.390591  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:24.478709  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:24.683540  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:24.889219  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:24.889433  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:24.980785  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:25.184012  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:25.389820  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:25.390725  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:25.397603  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:25.478226  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:25.684794  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:25.889316  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:25.889635  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:25.978691  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:26.184271  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:26.391952  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:26.392262  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:26.490292  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:26.685475  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:26.889821  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:26.890089  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:26.990361  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:27.184006  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:27.389529  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:27.390640  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:27.478393  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:27.684010  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:27.889986  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:27.890526  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:27.897424  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:27.978416  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:28.184430  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:28.389183  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:28.389581  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:28.478595  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:28.684276  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:28.890032  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:28.890496  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:28.989766  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:29.183950  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:29.390175  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:29.390680  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:29.491019  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:29.683933  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:29.889282  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:29.889536  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:29.978507  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:30.183729  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:30.389554  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:30.389668  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:30.397349  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:30.478308  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:30.685949  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:30.890114  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:30.890381  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:30.990624  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:31.183515  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:31.389370  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:31.389731  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:31.477662  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:31.683675  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:31.889650  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:31.890188  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:31.977745  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:32.183705  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:32.390013  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:32.390534  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:32.478465  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:32.683943  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:32.889741  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:32.890142  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:32.955633  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:32.979087  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:33.184679  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:33.389556  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:33.389819  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:33.479172  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:33.683858  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:33.890240  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:33.891177  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:33.978133  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:34.184349  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:34.389575  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:34.390719  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:34.478757  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:34.683856  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:34.889694  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:34.889965  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:34.978937  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:35.184070  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:35.389940  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:35.390086  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:35.395856  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:35.478589  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:35.683414  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:35.889865  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:35.890073  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:35.977942  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:36.183831  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:36.389506  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:36.389812  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:36.480389  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:36.684175  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:36.889976  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:36.890554  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:36.990306  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:37.184043  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:37.390123  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:37.390443  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:37.396610  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:37.478693  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:37.684257  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:37.959475  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:37.960677  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:37.978243  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:38.257810  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:38.462549  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:38.462933  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:38.554407  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:38.684688  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:38.955445  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:38.955912  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:38.978084  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:39.184628  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:39.391407  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:39.391768  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:39.397304  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:39.478252  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:39.684330  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:39.890200  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:39.890525  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:39.977900  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:40.184896  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:40.390093  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:40.390854  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:40.478276  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:40.684278  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:40.889777  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:40.890617  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:40.977884  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:41.184168  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:41.389158  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:41.389326  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:41.478591  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:41.684209  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:41.890247  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:41.890898  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:41.896600  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:41.978904  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:42.184519  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:42.389083  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:42.389437  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:42.478133  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:42.684144  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:42.889730  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:42.889803  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:42.978489  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:43.185233  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:43.389290  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:43.389643  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:43.478960  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:43.683645  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:43.889735  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:43.889902  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:43.897092  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:43.978331  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:44.184233  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:44.389557  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:44.390208  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:44.479283  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:44.685285  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:44.889752  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:44.890426  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:44.978741  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:45.185218  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:45.389175  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:45.389907  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:45.478517  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:45.684100  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:45.889353  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:45.889711  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:45.956483  380284 pod_ready.go:103] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:45.979771  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:46.184353  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:46.388959  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:46.389478  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:46.478712  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:46.683998  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:46.994694  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:46.994801  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:46.995189  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:47.185205  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:47.389972  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:47.390212  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:47.477712  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:47.684067  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:47.889750  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:47.889978  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:47.896170  380284 pod_ready.go:93] pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace has status "Ready":"True"
	I1105 17:42:47.896194  380284 pod_ready.go:82] duration metric: took 38.005012818s for pod "amd-gpu-device-plugin-ggn5k" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.896207  380284 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9g7cl" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.900842  380284 pod_ready.go:93] pod "coredns-7c65d6cfc9-9g7cl" in "kube-system" namespace has status "Ready":"True"
	I1105 17:42:47.900867  380284 pod_ready.go:82] duration metric: took 4.650994ms for pod "coredns-7c65d6cfc9-9g7cl" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.900885  380284 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-335216" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.908165  380284 pod_ready.go:93] pod "etcd-addons-335216" in "kube-system" namespace has status "Ready":"True"
	I1105 17:42:47.908188  380284 pod_ready.go:82] duration metric: took 7.296632ms for pod "etcd-addons-335216" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.908208  380284 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-335216" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.912871  380284 pod_ready.go:93] pod "kube-apiserver-addons-335216" in "kube-system" namespace has status "Ready":"True"
	I1105 17:42:47.912897  380284 pod_ready.go:82] duration metric: took 4.681937ms for pod "kube-apiserver-addons-335216" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.912907  380284 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-335216" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.917521  380284 pod_ready.go:93] pod "kube-controller-manager-addons-335216" in "kube-system" namespace has status "Ready":"True"
	I1105 17:42:47.917546  380284 pod_ready.go:82] duration metric: took 4.63263ms for pod "kube-controller-manager-addons-335216" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.917558  380284 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qvf2" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:47.978673  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:48.184366  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:48.295824  380284 pod_ready.go:93] pod "kube-proxy-4qvf2" in "kube-system" namespace has status "Ready":"True"
	I1105 17:42:48.295853  380284 pod_ready.go:82] duration metric: took 378.287861ms for pod "kube-proxy-4qvf2" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:48.295867  380284 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-335216" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:48.454627  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:48.455216  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:48.478607  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:48.683993  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:48.697933  380284 pod_ready.go:93] pod "kube-scheduler-addons-335216" in "kube-system" namespace has status "Ready":"True"
	I1105 17:42:48.697964  380284 pod_ready.go:82] duration metric: took 402.087441ms for pod "kube-scheduler-addons-335216" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:48.697993  380284 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace to be "Ready" ...
	I1105 17:42:48.890295  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:48.890805  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:48.977923  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:49.185267  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:49.389725  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1105 17:42:49.389864  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:49.478666  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:49.684968  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:49.890137  380284 kapi.go:107] duration metric: took 53.004781987s to wait for kubernetes.io/minikube-addons=registry ...
	I1105 17:42:49.890780  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:49.978496  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:50.183721  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:50.390019  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:50.478562  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:50.683441  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:50.704284  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:50.890790  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:50.979576  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:51.258898  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:51.457424  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:51.478540  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:51.683265  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:51.890628  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:51.978630  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:52.183928  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:52.390076  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:52.478879  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:52.685077  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:52.705404  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:52.890100  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:52.978413  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:53.185501  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:53.389761  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:53.478386  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:53.683904  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:53.891711  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:53.980237  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:54.257499  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:54.458019  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:54.554780  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:54.685058  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:54.759059  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:54.956685  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:55.055662  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:55.257030  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:55.457734  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:55.563857  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:55.762914  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:55.958624  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:56.055043  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:56.184073  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:56.389073  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:56.479262  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:56.685399  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:56.890993  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:56.977939  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:57.185371  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:57.204422  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:57.390029  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:57.478310  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:57.687412  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:57.889831  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:57.977944  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:58.185065  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:58.389631  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:58.478116  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:58.684286  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:58.890101  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:58.979559  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:59.184873  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:59.204608  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:42:59.390148  380284 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1105 17:42:59.478778  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:42:59.684894  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:42:59.889860  380284 kapi.go:107] duration metric: took 1m3.004505364s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1105 17:42:59.977917  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:00.185265  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:00.478207  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:00.684509  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:01.051757  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:01.184576  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:01.205968  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:01.478703  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:01.684676  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:01.979007  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:02.184248  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:02.478832  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1105 17:43:02.683997  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:02.979166  380284 kapi.go:107] duration metric: took 1m1.504596418s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1105 17:43:02.981387  380284 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-335216 cluster.
	I1105 17:43:02.982805  380284 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1105 17:43:02.984075  380284 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1105 17:43:03.185080  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:03.207474  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:03.684537  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:04.183817  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:04.684625  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:05.183593  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:05.683596  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:05.704370  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:06.184911  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:06.683657  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:07.184133  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:07.683870  380284 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1105 17:43:07.704704  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:08.184637  380284 kapi.go:107] duration metric: took 1m9.505522633s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1105 17:43:08.188828  380284 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, default-storageclass, storage-provisioner-rancher, inspektor-gadget, amd-gpu-device-plugin, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1105 17:43:08.190317  380284 addons.go:510] duration metric: took 1m17.823105027s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns default-storageclass storage-provisioner-rancher inspektor-gadget amd-gpu-device-plugin metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1105 17:43:10.203781  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:12.704571  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:15.203502  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:17.204457  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:19.204553  380284 pod_ready.go:103] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"False"
	I1105 17:43:20.203850  380284 pod_ready.go:93] pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:20.203877  380284 pod_ready.go:82] duration metric: took 31.505873758s for pod "metrics-server-84c5f94fbc-bgbsw" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:20.203887  380284 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fqv84" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:20.208490  380284 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fqv84" in "kube-system" namespace has status "Ready":"True"
	I1105 17:43:20.208513  380284 pod_ready.go:82] duration metric: took 4.618843ms for pod "nvidia-device-plugin-daemonset-fqv84" in "kube-system" namespace to be "Ready" ...
	I1105 17:43:20.208530  380284 pod_ready.go:39] duration metric: took 1m10.326614307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1105 17:43:20.208547  380284 api_server.go:52] waiting for apiserver process to appear ...
	I1105 17:43:20.208595  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:43:20.208642  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:43:20.245045  380284 cri.go:89] found id: "7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced"
	I1105 17:43:20.245071  380284 cri.go:89] found id: ""
	I1105 17:43:20.245085  380284 logs.go:282] 1 containers: [7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced]
	I1105 17:43:20.245135  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:20.248668  380284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:43:20.248728  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:43:20.285758  380284 cri.go:89] found id: "b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad"
	I1105 17:43:20.285778  380284 cri.go:89] found id: ""
	I1105 17:43:20.285786  380284 logs.go:282] 1 containers: [b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad]
	I1105 17:43:20.285840  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:20.289376  380284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:43:20.289440  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:43:20.325219  380284 cri.go:89] found id: "3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a"
	I1105 17:43:20.325246  380284 cri.go:89] found id: ""
	I1105 17:43:20.325260  380284 logs.go:282] 1 containers: [3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a]
	I1105 17:43:20.325313  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:20.328956  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:43:20.329043  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:43:20.363189  380284 cri.go:89] found id: "297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106"
	I1105 17:43:20.363214  380284 cri.go:89] found id: ""
	I1105 17:43:20.363223  380284 logs.go:282] 1 containers: [297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106]
	I1105 17:43:20.363277  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:20.366750  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:43:20.366814  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:43:20.401523  380284 cri.go:89] found id: "d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615"
	I1105 17:43:20.401546  380284 cri.go:89] found id: ""
	I1105 17:43:20.401556  380284 logs.go:282] 1 containers: [d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615]
	I1105 17:43:20.401618  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:20.404974  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:43:20.405066  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:43:20.439432  380284 cri.go:89] found id: "c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599"
	I1105 17:43:20.439454  380284 cri.go:89] found id: ""
	I1105 17:43:20.439464  380284 logs.go:282] 1 containers: [c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599]
	I1105 17:43:20.439520  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:20.443023  380284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:43:20.443081  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:43:20.477949  380284 cri.go:89] found id: "dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2"
	I1105 17:43:20.477977  380284 cri.go:89] found id: ""
	I1105 17:43:20.477989  380284 logs.go:282] 1 containers: [dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2]
	I1105 17:43:20.478043  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:20.481599  380284 logs.go:123] Gathering logs for container status ...
	I1105 17:43:20.481625  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:43:20.523951  380284 logs.go:123] Gathering logs for kubelet ...
	I1105 17:43:20.523990  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 17:43:20.606885  380284 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:43:20.606935  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:43:20.708845  380284 logs.go:123] Gathering logs for kube-apiserver [7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced] ...
	I1105 17:43:20.708875  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced"
	I1105 17:43:20.752539  380284 logs.go:123] Gathering logs for etcd [b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad] ...
	I1105 17:43:20.752580  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad"
	I1105 17:43:20.803189  380284 logs.go:123] Gathering logs for kube-scheduler [297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106] ...
	I1105 17:43:20.803227  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106"
	I1105 17:43:20.846729  380284 logs.go:123] Gathering logs for kube-controller-manager [c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599] ...
	I1105 17:43:20.846774  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599"
	I1105 17:43:20.904533  380284 logs.go:123] Gathering logs for dmesg ...
	I1105 17:43:20.904581  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:43:20.932903  380284 logs.go:123] Gathering logs for coredns [3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a] ...
	I1105 17:43:20.932940  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a"
	I1105 17:43:20.968195  380284 logs.go:123] Gathering logs for kube-proxy [d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615] ...
	I1105 17:43:20.968223  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615"
	I1105 17:43:21.001860  380284 logs.go:123] Gathering logs for kindnet [dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2] ...
	I1105 17:43:21.001896  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2"
	I1105 17:43:21.036003  380284 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:43:21.036044  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:43:23.615324  380284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1105 17:43:23.629228  380284 api_server.go:72] duration metric: took 1m33.262035177s to wait for apiserver process to appear ...
	I1105 17:43:23.629256  380284 api_server.go:88] waiting for apiserver healthz status ...
	I1105 17:43:23.629303  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:43:23.629360  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:43:23.663477  380284 cri.go:89] found id: "7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced"
	I1105 17:43:23.663506  380284 cri.go:89] found id: ""
	I1105 17:43:23.663514  380284 logs.go:282] 1 containers: [7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced]
	I1105 17:43:23.663575  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:23.667007  380284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:43:23.667079  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:43:23.700895  380284 cri.go:89] found id: "b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad"
	I1105 17:43:23.700921  380284 cri.go:89] found id: ""
	I1105 17:43:23.700931  380284 logs.go:282] 1 containers: [b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad]
	I1105 17:43:23.701012  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:23.704569  380284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:43:23.704638  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:43:23.737932  380284 cri.go:89] found id: "3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a"
	I1105 17:43:23.737954  380284 cri.go:89] found id: ""
	I1105 17:43:23.737962  380284 logs.go:282] 1 containers: [3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a]
	I1105 17:43:23.738006  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:23.741457  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:43:23.741520  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:43:23.774459  380284 cri.go:89] found id: "297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106"
	I1105 17:43:23.774484  380284 cri.go:89] found id: ""
	I1105 17:43:23.774493  380284 logs.go:282] 1 containers: [297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106]
	I1105 17:43:23.774538  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:23.777973  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:43:23.778032  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:43:23.812153  380284 cri.go:89] found id: "d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615"
	I1105 17:43:23.812176  380284 cri.go:89] found id: ""
	I1105 17:43:23.812184  380284 logs.go:282] 1 containers: [d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615]
	I1105 17:43:23.812228  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:23.815805  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:43:23.815868  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:43:23.850327  380284 cri.go:89] found id: "c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599"
	I1105 17:43:23.850352  380284 cri.go:89] found id: ""
	I1105 17:43:23.850360  380284 logs.go:282] 1 containers: [c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599]
	I1105 17:43:23.850409  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:23.853760  380284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:43:23.853812  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:43:23.887904  380284 cri.go:89] found id: "dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2"
	I1105 17:43:23.887926  380284 cri.go:89] found id: ""
	I1105 17:43:23.887936  380284 logs.go:282] 1 containers: [dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2]
	I1105 17:43:23.887993  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:23.891342  380284 logs.go:123] Gathering logs for kube-apiserver [7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced] ...
	I1105 17:43:23.891371  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced"
	I1105 17:43:23.935539  380284 logs.go:123] Gathering logs for etcd [b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad] ...
	I1105 17:43:23.935570  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad"
	I1105 17:43:23.985404  380284 logs.go:123] Gathering logs for kube-controller-manager [c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599] ...
	I1105 17:43:23.985439  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599"
	I1105 17:43:24.040603  380284 logs.go:123] Gathering logs for container status ...
	I1105 17:43:24.040649  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:43:24.084200  380284 logs.go:123] Gathering logs for kubelet ...
	I1105 17:43:24.084232  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 17:43:24.165650  380284 logs.go:123] Gathering logs for dmesg ...
	I1105 17:43:24.165701  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:43:24.195384  380284 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:43:24.195432  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:43:24.301434  380284 logs.go:123] Gathering logs for kindnet [dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2] ...
	I1105 17:43:24.301476  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2"
	I1105 17:43:24.335643  380284 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:43:24.335674  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:43:24.412770  380284 logs.go:123] Gathering logs for coredns [3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a] ...
	I1105 17:43:24.412812  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a"
	I1105 17:43:24.448277  380284 logs.go:123] Gathering logs for kube-scheduler [297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106] ...
	I1105 17:43:24.448323  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106"
	I1105 17:43:24.489613  380284 logs.go:123] Gathering logs for kube-proxy [d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615] ...
	I1105 17:43:24.489644  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615"
	I1105 17:43:27.023673  380284 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1105 17:43:27.027867  380284 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1105 17:43:27.028864  380284 api_server.go:141] control plane version: v1.31.2
	I1105 17:43:27.028894  380284 api_server.go:131] duration metric: took 3.39963205s to wait for apiserver health ...
	I1105 17:43:27.028902  380284 system_pods.go:43] waiting for kube-system pods to appear ...
	I1105 17:43:27.028930  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1105 17:43:27.028978  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1105 17:43:27.066659  380284 cri.go:89] found id: "7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced"
	I1105 17:43:27.066683  380284 cri.go:89] found id: ""
	I1105 17:43:27.066695  380284 logs.go:282] 1 containers: [7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced]
	I1105 17:43:27.066746  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:27.070045  380284 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1105 17:43:27.070099  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1105 17:43:27.106080  380284 cri.go:89] found id: "b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad"
	I1105 17:43:27.106104  380284 cri.go:89] found id: ""
	I1105 17:43:27.106114  380284 logs.go:282] 1 containers: [b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad]
	I1105 17:43:27.106180  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:27.110022  380284 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1105 17:43:27.110096  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1105 17:43:27.147855  380284 cri.go:89] found id: "3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a"
	I1105 17:43:27.147877  380284 cri.go:89] found id: ""
	I1105 17:43:27.147886  380284 logs.go:282] 1 containers: [3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a]
	I1105 17:43:27.147931  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:27.151652  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1105 17:43:27.151727  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1105 17:43:27.187117  380284 cri.go:89] found id: "297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106"
	I1105 17:43:27.187143  380284 cri.go:89] found id: ""
	I1105 17:43:27.187156  380284 logs.go:282] 1 containers: [297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106]
	I1105 17:43:27.187224  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:27.190520  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1105 17:43:27.190585  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1105 17:43:27.224933  380284 cri.go:89] found id: "d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615"
	I1105 17:43:27.224956  380284 cri.go:89] found id: ""
	I1105 17:43:27.224966  380284 logs.go:282] 1 containers: [d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615]
	I1105 17:43:27.225045  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:27.228402  380284 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1105 17:43:27.228474  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1105 17:43:27.262585  380284 cri.go:89] found id: "c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599"
	I1105 17:43:27.262606  380284 cri.go:89] found id: ""
	I1105 17:43:27.262614  380284 logs.go:282] 1 containers: [c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599]
	I1105 17:43:27.262661  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:27.266214  380284 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1105 17:43:27.266278  380284 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1105 17:43:27.300060  380284 cri.go:89] found id: "dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2"
	I1105 17:43:27.300085  380284 cri.go:89] found id: ""
	I1105 17:43:27.300094  380284 logs.go:282] 1 containers: [dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2]
	I1105 17:43:27.300157  380284 ssh_runner.go:195] Run: which crictl
	I1105 17:43:27.303953  380284 logs.go:123] Gathering logs for dmesg ...
	I1105 17:43:27.303987  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1105 17:43:27.331187  380284 logs.go:123] Gathering logs for describe nodes ...
	I1105 17:43:27.331227  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1105 17:43:27.434978  380284 logs.go:123] Gathering logs for kube-scheduler [297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106] ...
	I1105 17:43:27.435013  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106"
	I1105 17:43:27.475771  380284 logs.go:123] Gathering logs for kube-proxy [d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615] ...
	I1105 17:43:27.475804  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615"
	I1105 17:43:27.509702  380284 logs.go:123] Gathering logs for kube-controller-manager [c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599] ...
	I1105 17:43:27.509730  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599"
	I1105 17:43:27.563794  380284 logs.go:123] Gathering logs for kubelet ...
	I1105 17:43:27.563845  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1105 17:43:27.650782  380284 logs.go:123] Gathering logs for etcd [b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad] ...
	I1105 17:43:27.650830  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad"
	I1105 17:43:27.703922  380284 logs.go:123] Gathering logs for coredns [3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a] ...
	I1105 17:43:27.703959  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a"
	I1105 17:43:27.743473  380284 logs.go:123] Gathering logs for kindnet [dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2] ...
	I1105 17:43:27.743510  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2"
	I1105 17:43:27.777488  380284 logs.go:123] Gathering logs for CRI-O ...
	I1105 17:43:27.777518  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1105 17:43:27.851872  380284 logs.go:123] Gathering logs for container status ...
	I1105 17:43:27.851915  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1105 17:43:27.894085  380284 logs.go:123] Gathering logs for kube-apiserver [7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced] ...
	I1105 17:43:27.894122  380284 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced"
	I1105 17:43:30.449824  380284 system_pods.go:59] 19 kube-system pods found
	I1105 17:43:30.449860  380284 system_pods.go:61] "amd-gpu-device-plugin-ggn5k" [f3089d19-7be2-497b-8b15-629248ca3e32] Running
	I1105 17:43:30.449865  380284 system_pods.go:61] "coredns-7c65d6cfc9-9g7cl" [1400120f-11b7-4b6f-b017-cfc65065c46a] Running
	I1105 17:43:30.449869  380284 system_pods.go:61] "csi-hostpath-attacher-0" [00ffb7ea-fcd5-4c5a-99dc-29e6ff506f2f] Running
	I1105 17:43:30.449873  380284 system_pods.go:61] "csi-hostpath-resizer-0" [f7786e97-0da1-42ca-9ca3-878e82b2cbf4] Running
	I1105 17:43:30.449876  380284 system_pods.go:61] "csi-hostpathplugin-66kdl" [93901a93-90f3-4391-883a-a83931b16854] Running
	I1105 17:43:30.449879  380284 system_pods.go:61] "etcd-addons-335216" [c56ea170-9646-4e94-8b8a-b8b22ce1c49a] Running
	I1105 17:43:30.449882  380284 system_pods.go:61] "kindnet-rhdwr" [7bda0d1c-16b0-431b-9d4a-56c55fcddfce] Running
	I1105 17:43:30.449885  380284 system_pods.go:61] "kube-apiserver-addons-335216" [27e714e5-b824-4ccc-8806-d8b269c264b7] Running
	I1105 17:43:30.449888  380284 system_pods.go:61] "kube-controller-manager-addons-335216" [fb2e1186-c2ae-4d7a-afe5-664df0624145] Running
	I1105 17:43:30.449891  380284 system_pods.go:61] "kube-ingress-dns-minikube" [b5c961a2-f2f7-4182-a94d-9d823ee9e14b] Running
	I1105 17:43:30.449894  380284 system_pods.go:61] "kube-proxy-4qvf2" [bf87b596-a254-4bad-a923-3e3f55fa26c4] Running
	I1105 17:43:30.449899  380284 system_pods.go:61] "kube-scheduler-addons-335216" [1bed5165-dffb-4e5c-b07c-28e619e0c4eb] Running
	I1105 17:43:30.449904  380284 system_pods.go:61] "metrics-server-84c5f94fbc-bgbsw" [90bcd4f0-cd37-4ffb-a522-ef12b1a784f9] Running
	I1105 17:43:30.449907  380284 system_pods.go:61] "nvidia-device-plugin-daemonset-fqv84" [d8e371f4-fe52-48f2-8409-5b3c0d6c06ec] Running
	I1105 17:43:30.449910  380284 system_pods.go:61] "registry-66c9cd494c-8bj69" [2ea9bc79-8111-49cd-bba6-d3ec85a2cec7] Running
	I1105 17:43:30.449913  380284 system_pods.go:61] "registry-proxy-44k92" [38a87139-f1d5-4ea4-bfaa-491844cde446] Running
	I1105 17:43:30.449917  380284 system_pods.go:61] "snapshot-controller-56fcc65765-qq7ql" [972be8c1-b22d-436e-914b-8ee4dec45503] Running
	I1105 17:43:30.449922  380284 system_pods.go:61] "snapshot-controller-56fcc65765-s6knp" [96ed1ae5-de0b-4a2c-802b-6b2c6576dc6f] Running
	I1105 17:43:30.449925  380284 system_pods.go:61] "storage-provisioner" [e9b29d37-731f-4d2e-b35d-eff88bb77fff] Running
	I1105 17:43:30.449934  380284 system_pods.go:74] duration metric: took 3.421026015s to wait for pod list to return data ...
	I1105 17:43:30.449946  380284 default_sa.go:34] waiting for default service account to be created ...
	I1105 17:43:30.452581  380284 default_sa.go:45] found service account: "default"
	I1105 17:43:30.452605  380284 default_sa.go:55] duration metric: took 2.65033ms for default service account to be created ...
	I1105 17:43:30.452614  380284 system_pods.go:116] waiting for k8s-apps to be running ...
	I1105 17:43:30.460765  380284 system_pods.go:86] 19 kube-system pods found
	I1105 17:43:30.460803  380284 system_pods.go:89] "amd-gpu-device-plugin-ggn5k" [f3089d19-7be2-497b-8b15-629248ca3e32] Running
	I1105 17:43:30.460812  380284 system_pods.go:89] "coredns-7c65d6cfc9-9g7cl" [1400120f-11b7-4b6f-b017-cfc65065c46a] Running
	I1105 17:43:30.460818  380284 system_pods.go:89] "csi-hostpath-attacher-0" [00ffb7ea-fcd5-4c5a-99dc-29e6ff506f2f] Running
	I1105 17:43:30.460823  380284 system_pods.go:89] "csi-hostpath-resizer-0" [f7786e97-0da1-42ca-9ca3-878e82b2cbf4] Running
	I1105 17:43:30.460828  380284 system_pods.go:89] "csi-hostpathplugin-66kdl" [93901a93-90f3-4391-883a-a83931b16854] Running
	I1105 17:43:30.460832  380284 system_pods.go:89] "etcd-addons-335216" [c56ea170-9646-4e94-8b8a-b8b22ce1c49a] Running
	I1105 17:43:30.460839  380284 system_pods.go:89] "kindnet-rhdwr" [7bda0d1c-16b0-431b-9d4a-56c55fcddfce] Running
	I1105 17:43:30.460847  380284 system_pods.go:89] "kube-apiserver-addons-335216" [27e714e5-b824-4ccc-8806-d8b269c264b7] Running
	I1105 17:43:30.460853  380284 system_pods.go:89] "kube-controller-manager-addons-335216" [fb2e1186-c2ae-4d7a-afe5-664df0624145] Running
	I1105 17:43:30.460861  380284 system_pods.go:89] "kube-ingress-dns-minikube" [b5c961a2-f2f7-4182-a94d-9d823ee9e14b] Running
	I1105 17:43:30.460867  380284 system_pods.go:89] "kube-proxy-4qvf2" [bf87b596-a254-4bad-a923-3e3f55fa26c4] Running
	I1105 17:43:30.460875  380284 system_pods.go:89] "kube-scheduler-addons-335216" [1bed5165-dffb-4e5c-b07c-28e619e0c4eb] Running
	I1105 17:43:30.460884  380284 system_pods.go:89] "metrics-server-84c5f94fbc-bgbsw" [90bcd4f0-cd37-4ffb-a522-ef12b1a784f9] Running
	I1105 17:43:30.460894  380284 system_pods.go:89] "nvidia-device-plugin-daemonset-fqv84" [d8e371f4-fe52-48f2-8409-5b3c0d6c06ec] Running
	I1105 17:43:30.460903  380284 system_pods.go:89] "registry-66c9cd494c-8bj69" [2ea9bc79-8111-49cd-bba6-d3ec85a2cec7] Running
	I1105 17:43:30.460909  380284 system_pods.go:89] "registry-proxy-44k92" [38a87139-f1d5-4ea4-bfaa-491844cde446] Running
	I1105 17:43:30.460914  380284 system_pods.go:89] "snapshot-controller-56fcc65765-qq7ql" [972be8c1-b22d-436e-914b-8ee4dec45503] Running
	I1105 17:43:30.460920  380284 system_pods.go:89] "snapshot-controller-56fcc65765-s6knp" [96ed1ae5-de0b-4a2c-802b-6b2c6576dc6f] Running
	I1105 17:43:30.460927  380284 system_pods.go:89] "storage-provisioner" [e9b29d37-731f-4d2e-b35d-eff88bb77fff] Running
	I1105 17:43:30.460937  380284 system_pods.go:126] duration metric: took 8.316796ms to wait for k8s-apps to be running ...
	I1105 17:43:30.460951  380284 system_svc.go:44] waiting for kubelet service to be running ....
	I1105 17:43:30.461039  380284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1105 17:43:30.472583  380284 system_svc.go:56] duration metric: took 11.623159ms WaitForService to wait for kubelet
	I1105 17:43:30.472613  380284 kubeadm.go:582] duration metric: took 1m40.10543045s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1105 17:43:30.472641  380284 node_conditions.go:102] verifying NodePressure condition ...
	I1105 17:43:30.475870  380284 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1105 17:43:30.475896  380284 node_conditions.go:123] node cpu capacity is 8
	I1105 17:43:30.475912  380284 node_conditions.go:105] duration metric: took 3.265058ms to run NodePressure ...
	I1105 17:43:30.475928  380284 start.go:241] waiting for startup goroutines ...
	I1105 17:43:30.475940  380284 start.go:246] waiting for cluster config update ...
	I1105 17:43:30.475959  380284 start.go:255] writing updated cluster config ...
	I1105 17:43:30.476304  380284 ssh_runner.go:195] Run: rm -f paused
	I1105 17:43:30.527647  380284 start.go:600] kubectl: 1.31.2, cluster: 1.31.2 (minor skew: 0)
	I1105 17:43:30.529899  380284 out.go:177] * Done! kubectl is now configured to use "addons-335216" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 05 17:49:48 addons-335216 crio[1026]: time="2024-11-05 17:49:48.078608079Z" level=info msg="Stopped pod sandbox: a023c8f0f30ef0fe746e911dd5757b21a3456a8b421505540d4222c4ffe9f365" id=eb8ce34a-1413-455a-9757-4faf7972cd3a name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:49:48 addons-335216 crio[1026]: time="2024-11-05 17:49:48.267348153Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=43052213-c568-40fb-b811-7e8481903c46 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:49:48 addons-335216 crio[1026]: time="2024-11-05 17:49:48.267655941Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=43052213-c568-40fb-b811-7e8481903c46 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:49:48 addons-335216 crio[1026]: time="2024-11-05 17:49:48.933618486Z" level=info msg="Removing container: 875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7" id=3b396fc8-5d93-4873-b772-7ce348ca7c3d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 05 17:49:48 addons-335216 crio[1026]: time="2024-11-05 17:49:48.948975872Z" level=info msg="Removed container 875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7: kube-system/metrics-server-84c5f94fbc-bgbsw/metrics-server" id=3b396fc8-5d93-4873-b772-7ce348ca7c3d name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 05 17:49:50 addons-335216 crio[1026]: time="2024-11-05 17:49:50.267984716Z" level=info msg="Checking image status: busybox:stable" id=e3c2bfbe-f3cb-486c-9d50-0a7cfb5cd3ff name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:49:50 addons-335216 crio[1026]: time="2024-11-05 17:49:50.268162869Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Nov 05 17:49:50 addons-335216 crio[1026]: time="2024-11-05 17:49:50.268257873Z" level=info msg="Image busybox:stable not found" id=e3c2bfbe-f3cb-486c-9d50-0a7cfb5cd3ff name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:49:59 addons-335216 crio[1026]: time="2024-11-05 17:49:59.267213722Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e2a62872-c3bd-4ddf-9199-0ae6363aaf97 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:49:59 addons-335216 crio[1026]: time="2024-11-05 17:49:59.267496250Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=e2a62872-c3bd-4ddf-9199-0ae6363aaf97 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:50:04 addons-335216 crio[1026]: time="2024-11-05 17:50:04.267900772Z" level=info msg="Checking image status: busybox:stable" id=03f5970e-343d-4a81-92ac-25cc5ae6362f name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:50:04 addons-335216 crio[1026]: time="2024-11-05 17:50:04.268140247Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Nov 05 17:50:04 addons-335216 crio[1026]: time="2024-11-05 17:50:04.268304385Z" level=info msg="Image busybox:stable not found" id=03f5970e-343d-4a81-92ac-25cc5ae6362f name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:50:18 addons-335216 crio[1026]: time="2024-11-05 17:50:18.267430408Z" level=info msg="Checking image status: busybox:stable" id=ff3cd841-2d5c-4f10-acfb-12862058bae7 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:50:18 addons-335216 crio[1026]: time="2024-11-05 17:50:18.267645826Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Nov 05 17:50:18 addons-335216 crio[1026]: time="2024-11-05 17:50:18.267753753Z" level=info msg="Image busybox:stable not found" id=ff3cd841-2d5c-4f10-acfb-12862058bae7 name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:50:31 addons-335216 crio[1026]: time="2024-11-05 17:50:31.267272496Z" level=info msg="Checking image status: busybox:stable" id=f4551774-57f4-43e4-8669-aca55c2eaddf name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:50:31 addons-335216 crio[1026]: time="2024-11-05 17:50:31.267482630Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Nov 05 17:50:31 addons-335216 crio[1026]: time="2024-11-05 17:50:31.267648026Z" level=info msg="Image busybox:stable not found" id=f4551774-57f4-43e4-8669-aca55c2eaddf name=/runtime.v1.ImageService/ImageStatus
	Nov 05 17:50:36 addons-335216 crio[1026]: time="2024-11-05 17:50:36.350782436Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=19cdf987-a20d-4311-9a18-f41789616087 name=/runtime.v1.ImageService/PullImage
	Nov 05 17:50:36 addons-335216 crio[1026]: time="2024-11-05 17:50:36.367171040Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 05 17:50:45 addons-335216 crio[1026]: time="2024-11-05 17:50:45.571313519Z" level=info msg="Stopping pod sandbox: a023c8f0f30ef0fe746e911dd5757b21a3456a8b421505540d4222c4ffe9f365" id=e416b64c-c934-4981-b67f-a670cc79a2bd name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:50:45 addons-335216 crio[1026]: time="2024-11-05 17:50:45.571379117Z" level=info msg="Stopped pod sandbox (already stopped): a023c8f0f30ef0fe746e911dd5757b21a3456a8b421505540d4222c4ffe9f365" id=e416b64c-c934-4981-b67f-a670cc79a2bd name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 05 17:50:45 addons-335216 crio[1026]: time="2024-11-05 17:50:45.571812085Z" level=info msg="Removing pod sandbox: a023c8f0f30ef0fe746e911dd5757b21a3456a8b421505540d4222c4ffe9f365" id=a3f27bea-aa15-4281-8a85-8814e8044fcd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 05 17:50:45 addons-335216 crio[1026]: time="2024-11-05 17:50:45.578565223Z" level=info msg="Removed pod sandbox: a023c8f0f30ef0fe746e911dd5757b21a3456a8b421505540d4222c4ffe9f365" id=a3f27bea-aa15-4281-8a85-8814e8044fcd name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	70f1ca1ddb7ae       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                                              6 minutes ago       Running             nginx                                    0                   ce21c09a7808e       nginx
	2181b275925e2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          7 minutes ago       Running             busybox                                  0                   c41a299cd6fc8       busybox
	2f153fda3f7a1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   829a3bfe362d1       csi-hostpathplugin-66kdl
	724ad526a7440       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   829a3bfe362d1       csi-hostpathplugin-66kdl
	82424d2ad29aa       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   829a3bfe362d1       csi-hostpathplugin-66kdl
	1469cfb6cd3f5       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   829a3bfe362d1       csi-hostpathplugin-66kdl
	d35227f0c0e24       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   829a3bfe362d1       csi-hostpathplugin-66kdl
	259caac9c5b0f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   8 minutes ago       Running             csi-external-health-monitor-controller   0                   829a3bfe362d1       csi-hostpathplugin-66kdl
	94ded17e79610       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              8 minutes ago       Running             csi-resizer                              0                   619078faf52e2       csi-hostpath-resizer-0
	947d157f771f5       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             8 minutes ago       Running             csi-attacher                             0                   827d9a5597f8b       csi-hostpath-attacher-0
	5b5700ebbd178       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   8556a4e89e129       snapshot-controller-56fcc65765-qq7ql
	6bf597adcf295       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   9a038ff2d979d       snapshot-controller-56fcc65765-s6knp
	3b9ff7ae728a1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             8 minutes ago       Running             coredns                                  0                   16f13e4bd49f7       coredns-7c65d6cfc9-9g7cl
	6704fad068753       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   bcc853369a3bf       storage-provisioner
	dd19a944834ad       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                                           8 minutes ago       Running             kindnet-cni                              0                   60a5d9f4a8a41       kindnet-rhdwr
	d1826b432739c       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                                             8 minutes ago       Running             kube-proxy                               0                   5d1b298acb50e       kube-proxy-4qvf2
	b06aff6eec894       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             9 minutes ago       Running             etcd                                     0                   bb82ba72a34e3       etcd-addons-335216
	7c47c42c90c1d       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                                             9 minutes ago       Running             kube-apiserver                           0                   cdecca2f0d47c       kube-apiserver-addons-335216
	c378179c676b7       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                                             9 minutes ago       Running             kube-controller-manager                  0                   299efb5e8f48b       kube-controller-manager-addons-335216
	297d55526c163       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                                             9 minutes ago       Running             kube-scheduler                           0                   a56a6310689cb       kube-scheduler-addons-335216
	
	
	==> coredns [3b9ff7ae728a176728ac7fba0c50d307ad2ecaf06ee6aebe565cba6b16de3d5a] <==
	[INFO] 10.244.0.21:54180 - 58558 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005449376s
	[INFO] 10.244.0.21:49924 - 54801 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005265932s
	[INFO] 10.244.0.21:49571 - 16704 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00562881s
	[INFO] 10.244.0.21:52364 - 41370 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005732065s
	[INFO] 10.244.0.21:58548 - 14985 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005893987s
	[INFO] 10.244.0.21:55974 - 44426 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006315625s
	[INFO] 10.244.0.21:56556 - 60338 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007202055s
	[INFO] 10.244.0.21:58548 - 27043 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.001331729s
	[INFO] 10.244.0.21:49924 - 39500 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.001479122s
	[INFO] 10.244.0.21:52364 - 41810 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.001395794s
	[INFO] 10.244.0.21:58690 - 58926 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.007129413s
	[INFO] 10.244.0.21:54180 - 24086 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.002098939s
	[INFO] 10.244.0.21:54180 - 19849 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005022s
	[INFO] 10.244.0.21:49924 - 18835 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000033338s
	[INFO] 10.244.0.21:58548 - 35623 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081049s
	[INFO] 10.244.0.21:56556 - 11873 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053205s
	[INFO] 10.244.0.21:58690 - 2806 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.041692177s
	[INFO] 10.244.0.21:55974 - 21302 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.042782819s
	[INFO] 10.244.0.21:49571 - 10737 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.04261586s
	[INFO] 10.244.0.21:52364 - 56469 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090306s
	[INFO] 10.244.0.21:49571 - 32717 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088015s
	[INFO] 10.244.0.21:58690 - 62168 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007755485s
	[INFO] 10.244.0.21:55974 - 30234 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00772617s
	[INFO] 10.244.0.21:55974 - 59522 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103216s
	[INFO] 10.244.0.21:58690 - 2648 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000166517s
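The NXDOMAIN entries above are expected resolver behavior rather than a fault: with the default pod `ndots:5` option, the queried name `hello-world-app.default.svc.cluster.local` has only four dots, so the resolver first appends each host-inherited search suffix (`google.internal`, `c.k8s-minikube.internal` on this GCP runner) before trying the name verbatim, and each suffixed miss produces one NXDOMAIN line. A minimal sketch of that expansion logic (the search list is assumed from the log above, not read from a real resolv.conf):

```python
def expand_query(name: str, search: list[str], ndots: int = 5) -> list[str]:
    """Return candidate FQDNs in the order a resolver with `options ndots:N`
    would try them: search-suffixed forms first when the name has < ndots dots."""
    if name.endswith("."):          # already absolute: tried as-is, no expansion
        return [name]
    candidates = []
    if name.count(".") < ndots:     # below the ndots threshold: suffixes first
        candidates += [f"{name}.{s}." for s in search]
    candidates.append(name + ".")   # finally the name itself
    return candidates

# Search domains assumed from the NXDOMAIN suffixes seen in the log above.
search = ["google.internal", "c.k8s-minikube.internal"]
for fqdn in expand_query("hello-world-app.default.svc.cluster.local", search):
    print(fqdn)
```

Only the final, unsuffixed form is answered NOERROR by CoreDNS, which matches the trailing `NOERROR qr,aa,rd` lines in the block above.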
	
	
	==> describe nodes <==
	Name:               addons-335216
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-335216
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=47b0afc9e70653f81ca813437c4c46b74450b911
	                    minikube.k8s.io/name=addons-335216
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_11_05T17_41_46_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-335216
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-335216"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 05 Nov 2024 17:41:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-335216
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 05 Nov 2024 17:50:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 05 Nov 2024 17:49:55 +0000   Tue, 05 Nov 2024 17:41:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 05 Nov 2024 17:49:55 +0000   Tue, 05 Nov 2024 17:41:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 05 Nov 2024 17:49:55 +0000   Tue, 05 Nov 2024 17:41:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 05 Nov 2024 17:49:55 +0000   Tue, 05 Nov 2024 17:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-335216
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdc5dc7e3b754511b1f71edaa4e18f0d
	  System UUID:                709f5e58-81ee-41f0-b8d3-e98010f45c6e
	  Boot ID:                    74cfa202-babf-4735-bbec-338fcc5191ed
	  Kernel Version:             5.15.0-1070-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  default                     hello-world-app-55bf9c44b4-kgs7g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
	  default                     task-pv-pod-restore                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     test-local-path                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 coredns-7c65d6cfc9-9g7cl                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m57s
	  kube-system                 csi-hostpath-attacher-0                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 csi-hostpath-resizer-0                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 csi-hostpathplugin-66kdl                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 etcd-addons-335216                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m2s
	  kube-system                 kindnet-rhdwr                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m58s
	  kube-system                 kube-apiserver-addons-335216             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-controller-manager-addons-335216    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-proxy-4qvf2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  kube-system                 kube-scheduler-addons-335216             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 snapshot-controller-56fcc65765-qq7ql     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 snapshot-controller-56fcc65765-s6knp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 8m52s                kube-proxy       
	  Normal   Starting                 9m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m8s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m7s (x8 over 9m7s)  kubelet          Node addons-335216 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m7s (x8 over 9m7s)  kubelet          Node addons-335216 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m7s (x7 over 9m7s)  kubelet          Node addons-335216 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m2s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m2s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m2s                 kubelet          Node addons-335216 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m2s                 kubelet          Node addons-335216 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m2s                 kubelet          Node addons-335216 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m58s                node-controller  Node addons-335216 event: Registered Node addons-335216 in Controller
	  Normal   NodeReady                8m38s                kubelet          Node addons-335216 status is now: NodeReady
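The percentages in the Allocated resources table above are each request/limit divided by the node's allocatable values (8 CPUs = 8000m, 32859312Ki memory), truncated to a whole percent by `kubectl describe`. A quick sanity check, using only numbers copied from the tables above:

```python
def pct(used: float, allocatable: float) -> int:
    # kubectl describe truncates to a whole percent (no rounding)
    return int(used / allocatable * 100)

cpu_alloc_m = 8 * 1000                 # 8 allocatable CPUs -> 8000 millicores
mem_alloc_ki = 32859312                # allocatable memory in Ki

print(pct(850, cpu_alloc_m))           # total cpu requests: 850m  -> 10 (%)
print(pct(100, cpu_alloc_m))           # total cpu limits:   100m  -> 1  (%)
print(pct(220 * 1024, mem_alloc_ki))   # memory 220Mi (in Ki)      -> 0  (%)
```

The 850m of CPU requests is dominated by the control-plane pods (kube-apiserver 250m, kube-controller-manager 200m, plus 100m each for coredns, etcd, kindnet, and kube-scheduler), so the node is far from saturated at the time of the failure.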
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: 02 42 ba 1a a6 1f 02 42 c0 a8 55 02 08 00
	[  +0.004044] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-bb04140d975e
	[  +0.000005] ll header: 00000000: 02 42 ba 1a a6 1f 02 42 c0 a8 55 02 08 00
	[  +0.000008] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-bb04140d975e
	[  +0.000001] ll header: 00000000: 02 42 ba 1a a6 1f 02 42 c0 a8 55 02 08 00
	[  +8.187182] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-bb04140d975e
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-bb04140d975e
	[  +0.000006] ll header: 00000000: 02 42 ba 1a a6 1f 02 42 c0 a8 55 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 ba 1a a6 1f 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-bb04140d975e
	[  +0.000002] ll header: 00000000: 02 42 ba 1a a6 1f 02 42 c0 a8 55 02 08 00
	[Nov 5 17:44] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: a2 1c 66 d3 5b 2c 02 54 f5 7f 86 9b 08 00
	[  +1.011574] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: a2 1c 66 d3 5b 2c 02 54 f5 7f 86 9b 08 00
	[  +2.019853] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 1c 66 d3 5b 2c 02 54 f5 7f 86 9b 08 00
	[  +4.091599] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 1c 66 d3 5b 2c 02 54 f5 7f 86 9b 08 00
	[  +8.191283] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: a2 1c 66 d3 5b 2c 02 54 f5 7f 86 9b 08 00
	[ +16.126604] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: a2 1c 66 d3 5b 2c 02 54 f5 7f 86 9b 08 00
	[Nov 5 17:45] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: a2 1c 66 d3 5b 2c 02 54 f5 7f 86 9b 08 00
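The martian-source entries for 10.244.0.21 above arrive at roughly doubling intervals (+1s, +2s, +4s, +8s, +16s), which is the signature of TCP SYN retransmission backoff: a pod keeps retrying a connection whose replies appear to come from 127.0.0.1, so the kernel flags each retry as a martian packet. A hypothetical illustration of that spacing, assuming the Linux default of a 1-second initial retransmission timeout that doubles on each attempt:

```python
# Hypothetical sketch: expected gaps between successive SYN retransmits,
# assuming a 1-second initial RTO that doubles on every timeout.
def syn_backoff(initial: float = 1.0, retries: int = 5) -> list[float]:
    return [initial * (2 ** i) for i in range(retries)]

print(syn_backoff())  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

The computed gaps line up with the `+1.01`, `+2.02`, `+4.09`, `+8.19`, `+16.13` offsets logged above (each slightly over the nominal value, as RTO timers fire a little late).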
	
	
	==> etcd [b06aff6eec89416c8dc7c162b0ab86b2bae796d3ec0786751eeeaa16d47e31ad] <==
	{"level":"warn","ts":"2024-11-05T17:41:54.163064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.525056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:41:54.163469Z","caller":"traceutil/trace.go:171","msg":"trace[379517557] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:436; }","duration":"187.924715ms","start":"2024-11-05T17:41:53.975533Z","end":"2024-11-05T17:41:54.163458Z","steps":["trace[379517557] 'agreement among raft nodes before linearized reading'  (duration: 187.502938ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:41:54.165602Z","caller":"traceutil/trace.go:171","msg":"trace[2012423142] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"100.043823ms","start":"2024-11-05T17:41:54.065538Z","end":"2024-11-05T17:41:54.165582Z","steps":["trace[2012423142] 'process raft request'  (duration: 99.692698ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:41:54.166251Z","caller":"traceutil/trace.go:171","msg":"trace[689837818] transaction","detail":"{read_only:false; response_revision:438; number_of_response:1; }","duration":"100.29284ms","start":"2024-11-05T17:41:54.065942Z","end":"2024-11-05T17:41:54.166235Z","steps":["trace[689837818] 'process raft request'  (duration: 99.826089ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:41:54.167236Z","caller":"traceutil/trace.go:171","msg":"trace[995718793] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"100.653675ms","start":"2024-11-05T17:41:54.066568Z","end":"2024-11-05T17:41:54.167222Z","steps":["trace[995718793] 'process raft request'  (duration: 100.013657ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:41:54.167550Z","caller":"traceutil/trace.go:171","msg":"trace[1698660953] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"100.705975ms","start":"2024-11-05T17:41:54.066835Z","end":"2024-11-05T17:41:54.167541Z","steps":["trace[1698660953] 'process raft request'  (duration: 100.075668ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:41:54.655437Z","caller":"traceutil/trace.go:171","msg":"trace[1001683382] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"183.307709ms","start":"2024-11-05T17:41:54.472108Z","end":"2024-11-05T17:41:54.655416Z","steps":["trace[1001683382] 'process raft request'  (duration: 97.005382ms)","trace[1001683382] 'compare'  (duration: 84.497388ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:41:54.655725Z","caller":"traceutil/trace.go:171","msg":"trace[168529455] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:494; }","duration":"183.127576ms","start":"2024-11-05T17:41:54.472584Z","end":"2024-11-05T17:41:54.655711Z","steps":["trace[168529455] 'read index received'  (duration: 3.724926ms)","trace[168529455] 'applied index is now lower than readState.Index'  (duration: 179.401443ms)"],"step_count":2}
	{"level":"warn","ts":"2024-11-05T17:41:54.655926Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.220851ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/registry-proxy\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:41:54.657087Z","caller":"traceutil/trace.go:171","msg":"trace[1083822170] range","detail":"{range_begin:/registry/daemonsets/kube-system/registry-proxy; range_end:; response_count:0; response_revision:486; }","duration":"185.38381ms","start":"2024-11-05T17:41:54.471685Z","end":"2024-11-05T17:41:54.657069Z","steps":["trace[1083822170] 'agreement among raft nodes before linearized reading'  (duration: 184.197769ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:41:54.656225Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.207027ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-11-05T17:41:54.657351Z","caller":"traceutil/trace.go:171","msg":"trace[233791820] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:1; response_revision:486; }","duration":"186.474066ms","start":"2024-11-05T17:41:54.470866Z","end":"2024-11-05T17:41:54.657340Z","steps":["trace[233791820] 'agreement among raft nodes before linearized reading'  (duration: 185.11341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:41:54.656413Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.764434ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-11-05T17:41:54.664046Z","caller":"traceutil/trace.go:171","msg":"trace[1843032122] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:486; }","duration":"110.387318ms","start":"2024-11-05T17:41:54.553640Z","end":"2024-11-05T17:41:54.664027Z","steps":["trace[1843032122] 'agreement among raft nodes before linearized reading'  (duration: 102.654622ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:41:54.657970Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.236068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/local-path-provisioner-role\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:41:54.664458Z","caller":"traceutil/trace.go:171","msg":"trace[1883510683] range","detail":"{range_begin:/registry/clusterroles/local-path-provisioner-role; range_end:; response_count:0; response_revision:486; }","duration":"109.724808ms","start":"2024-11-05T17:41:54.554719Z","end":"2024-11-05T17:41:54.664444Z","steps":["trace[1883510683] 'agreement among raft nodes before linearized reading'  (duration: 103.213126ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:42:46.989519Z","caller":"traceutil/trace.go:171","msg":"trace[625977649] transaction","detail":"{read_only:false; response_revision:1110; number_of_response:1; }","duration":"133.028352ms","start":"2024-11-05T17:42:46.856471Z","end":"2024-11-05T17:42:46.989499Z","steps":["trace[625977649] 'process raft request'  (duration: 118.587345ms)","trace[625977649] 'compare'  (duration: 14.324801ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:42:46.991070Z","caller":"traceutil/trace.go:171","msg":"trace[1086063403] linearizableReadLoop","detail":"{readStateIndex:1141; appliedIndex:1139; }","duration":"103.947284ms","start":"2024-11-05T17:42:46.887105Z","end":"2024-11-05T17:42:46.991053Z","steps":["trace[1086063403] 'read index received'  (duration: 87.9642ms)","trace[1086063403] 'applied index is now lower than readState.Index'  (duration: 15.982479ms)"],"step_count":2}
	{"level":"info","ts":"2024-11-05T17:42:46.991164Z","caller":"traceutil/trace.go:171","msg":"trace[1692591119] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"122.882103ms","start":"2024-11-05T17:42:46.868263Z","end":"2024-11-05T17:42:46.991145Z","steps":["trace[1692591119] 'process raft request'  (duration: 122.667617ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:42:46.991215Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.086231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-11-05T17:42:46.991260Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.433085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-11-05T17:42:46.991294Z","caller":"traceutil/trace.go:171","msg":"trace[1876722032] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1111; }","duration":"100.468185ms","start":"2024-11-05T17:42:46.890818Z","end":"2024-11-05T17:42:46.991287Z","steps":["trace[1876722032] 'agreement among raft nodes before linearized reading'  (duration: 100.345619ms)"],"step_count":1}
	{"level":"warn","ts":"2024-11-05T17:42:46.991432Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.321452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-11-05T17:42:46.991460Z","caller":"traceutil/trace.go:171","msg":"trace[1287302410] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"104.352459ms","start":"2024-11-05T17:42:46.887101Z","end":"2024-11-05T17:42:46.991453Z","steps":["trace[1287302410] 'agreement among raft nodes before linearized reading'  (duration: 104.309394ms)"],"step_count":1}
	{"level":"info","ts":"2024-11-05T17:42:46.991267Z","caller":"traceutil/trace.go:171","msg":"trace[775880284] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"104.138938ms","start":"2024-11-05T17:42:46.887102Z","end":"2024-11-05T17:42:46.991241Z","steps":["trace[775880284] 'agreement among raft nodes before linearized reading'  (duration: 104.041863ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:50:47 up  1:33,  0 users,  load average: 0.28, 0.28, 0.48
	Linux addons-335216 5.15.0-1070-gcp #78~20.04.1-Ubuntu SMP Wed Oct 9 22:05:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [dd19a944834adbb59b5579fc207d9ef94fe29a0e4593100251e55ec81f6612f2] <==
	I1105 17:48:39.277180       1 main.go:301] handling current node
	I1105 17:48:49.271328       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:48:49.271364       1 main.go:301] handling current node
	I1105 17:48:59.270635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:48:59.270674       1 main.go:301] handling current node
	I1105 17:49:09.277407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:49:09.277459       1 main.go:301] handling current node
	I1105 17:49:19.271436       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:49:19.271472       1 main.go:301] handling current node
	I1105 17:49:29.272753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:49:29.272802       1 main.go:301] handling current node
	I1105 17:49:39.277967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:49:39.278007       1 main.go:301] handling current node
	I1105 17:49:49.271005       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:49:49.271069       1 main.go:301] handling current node
	I1105 17:49:59.271188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:49:59.271222       1 main.go:301] handling current node
	I1105 17:50:09.277741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:50:09.277771       1 main.go:301] handling current node
	I1105 17:50:19.271113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:50:19.271146       1 main.go:301] handling current node
	I1105 17:50:29.271223       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:50:29.271261       1 main.go:301] handling current node
	I1105 17:50:39.277086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1105 17:50:39.277126       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7c47c42c90c1d4bb2f0f42ce0f953e8810e87fdb2ac13005702d4cbddc510ced] <==
	E1105 17:42:56.777237       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1105 17:42:56.777266       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1105 17:42:56.778371       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1105 17:42:56.778396       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1105 17:43:20.127779       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.155.110:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.155.110:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.155.110:443: connect: connection refused" logger="UnhandledError"
	W1105 17:43:20.127799       1 handler_proxy.go:99] no RequestInfo found in the context
	E1105 17:43:20.127872       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1105 17:43:20.129669       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.155.110:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.155.110:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.155.110:443: connect: connection refused" logger="UnhandledError"
	I1105 17:43:20.162609       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1105 17:43:38.222292       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38324: use of closed network connection
	E1105 17:43:38.393950       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38354: use of closed network connection
	I1105 17:43:47.426379       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.184.184"}
	I1105 17:44:04.381899       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1105 17:44:04.549419       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.20.84"}
	I1105 17:44:08.353998       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1105 17:44:09.470507       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1105 17:44:28.515784       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1105 17:46:24.166814       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.233.213"}
	I1105 17:50:21.178319       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [c378179c676b7d624f8314f2fce69d3c9d73b6e02036bbef62453d34d4094599] <==
	I1105 17:46:28.122469       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I1105 17:46:28.124467       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5f85ff4588" duration="8.537µs"
	I1105 17:46:28.126513       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I1105 17:46:38.328087       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I1105 17:46:40.828602       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="10.182µs"
	I1105 17:46:51.016783       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I1105 17:46:57.981892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="9.109µs"
	W1105 17:47:09.418628       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:47:09.418677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1105 17:47:21.858594       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="5.103µs"
	I1105 17:47:32.640692       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="65.528µs"
	W1105 17:47:43.521215       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:47:43.521265       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1105 17:47:45.278194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="145.53µs"
	I1105 17:48:09.387453       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W1105 17:48:41.194239       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:48:41.194308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1105 17:49:15.868156       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:49:15.868203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1105 17:49:46.780820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="22.43µs"
	I1105 17:49:48.277147       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="66.18µs"
	I1105 17:49:55.266480       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-335216"
	I1105 17:49:59.275761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="61.614µs"
	W1105 17:50:15.783038       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1105 17:50:15.783091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d1826b432739cb80c96769a2246290e986debd39ccb95252a959e90c92b62615] <==
	I1105 17:41:53.174534       1 server_linux.go:66] "Using iptables proxy"
	I1105 17:41:54.453451       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1105 17:41:54.453707       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1105 17:41:54.872880       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1105 17:41:54.873049       1 server_linux.go:169] "Using iptables Proxier"
	I1105 17:41:54.954663       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1105 17:41:54.955387       1 server.go:483] "Version info" version="v1.31.2"
	I1105 17:41:54.955666       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1105 17:41:54.957255       1 config.go:199] "Starting service config controller"
	I1105 17:41:54.958483       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1105 17:41:54.958211       1 config.go:328] "Starting node config controller"
	I1105 17:41:54.958615       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1105 17:41:54.957787       1 config.go:105] "Starting endpoint slice config controller"
	I1105 17:41:54.958688       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1105 17:41:55.060748       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1105 17:41:55.063293       1 shared_informer.go:320] Caches are synced for service config
	I1105 17:41:55.063322       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [297d55526c16303fddcd9f847e35ecb946562823c8b599af7a098e502f008106] <==
	W1105 17:41:42.783066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 17:41:42.783079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:42.854008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1105 17:41:42.854066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.590586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1105 17:41:43.590638       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.637484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1105 17:41:43.637529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.701237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1105 17:41:43.701284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.747581       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1105 17:41:43.747629       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1105 17:41:43.761112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1105 17:41:43.761154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.791952       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1105 17:41:43.792003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.908664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1105 17:41:43.908706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.927009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1105 17:41:43.927053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.927818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1105 17:41:43.927845       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1105 17:41:43.986474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1105 17:41:43.986530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1105 17:41:46.679391       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 05 17:49:48 addons-335216 kubelet[1624]: I1105 17:49:48.932472    1624 scope.go:117] "RemoveContainer" containerID="875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7"
	Nov 05 17:49:48 addons-335216 kubelet[1624]: I1105 17:49:48.949342    1624 scope.go:117] "RemoveContainer" containerID="875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7"
	Nov 05 17:49:48 addons-335216 kubelet[1624]: E1105 17:49:48.949758    1624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7\": container with ID starting with 875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7 not found: ID does not exist" containerID="875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7"
	Nov 05 17:49:48 addons-335216 kubelet[1624]: I1105 17:49:48.949801    1624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7"} err="failed to get container status \"875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7\": rpc error: code = NotFound desc = could not find container \"875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7\": container with ID starting with 875d79ecfa2817cdf5cd4ba5ebe549156cdc8ad8a98a8b9eb1d435ce74983ce7 not found: ID does not exist"
	Nov 05 17:49:49 addons-335216 kubelet[1624]: I1105 17:49:49.266944    1624 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 05 17:49:49 addons-335216 kubelet[1624]: I1105 17:49:49.268543    1624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90bcd4f0-cd37-4ffb-a522-ef12b1a784f9" path="/var/lib/kubelet/pods/90bcd4f0-cd37-4ffb-a522-ef12b1a784f9/volumes"
	Nov 05 17:49:50 addons-335216 kubelet[1624]: E1105 17:49:50.268542    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="63da9a66-8853-4fc5-ab65-2d39598cbba0"
	Nov 05 17:49:55 addons-335216 kubelet[1624]: E1105 17:49:55.537347    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828995537100387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:49:55 addons-335216 kubelet[1624]: E1105 17:49:55.537389    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730828995537100387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:04 addons-335216 kubelet[1624]: E1105 17:50:04.268591    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="63da9a66-8853-4fc5-ab65-2d39598cbba0"
	Nov 05 17:50:05 addons-335216 kubelet[1624]: E1105 17:50:05.539178    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829005538988477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:05 addons-335216 kubelet[1624]: E1105 17:50:05.539225    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829005538988477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:15 addons-335216 kubelet[1624]: E1105 17:50:15.541031    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829015540785487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:15 addons-335216 kubelet[1624]: E1105 17:50:15.541077    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829015540785487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:18 addons-335216 kubelet[1624]: E1105 17:50:18.268020    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\"\"" pod="default/test-local-path" podUID="63da9a66-8853-4fc5-ab65-2d39598cbba0"
	Nov 05 17:50:25 addons-335216 kubelet[1624]: E1105 17:50:25.543753    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829025543554052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:25 addons-335216 kubelet[1624]: E1105 17:50:25.543787    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829025543554052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:35 addons-335216 kubelet[1624]: E1105 17:50:35.545659    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829035545448541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:35 addons-335216 kubelet[1624]: E1105 17:50:35.545700    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829035545448541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:36 addons-335216 kubelet[1624]: E1105 17:50:36.350198    1624 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 05 17:50:36 addons-335216 kubelet[1624]: E1105 17:50:36.350300    1624 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 05 17:50:36 addons-335216 kubelet[1624]: E1105 17:50:36.350586    1624 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82jtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod-restore_default(2b0585ef-8725-44ca-aba1-bd7737a1af78): ErrImagePull: loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 05 17:50:36 addons-335216 kubelet[1624]: E1105 17:50:36.351940    1624 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="2b0585ef-8725-44ca-aba1-bd7737a1af78"
	Nov 05 17:50:45 addons-335216 kubelet[1624]: E1105 17:50:45.547704    1624 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829045547456898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 05 17:50:45 addons-335216 kubelet[1624]: E1105 17:50:45.547744    1624 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1730829045547456898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598969,},InodesUsed:&UInt64Value{Value:230,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [6704fad06875389be423a925d46d644d56731a9ccdc7fd5b20d93d9d529f53a5] <==
	I1105 17:42:10.758917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1105 17:42:10.766963       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1105 17:42:10.767004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1105 17:42:10.775229       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1105 17:42:10.775330       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3553372e-4149-447f-aadf-6e2865562ec3", APIVersion:"v1", ResourceVersion:"933", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-335216_1c9aa4ff-2c2c-4b78-b565-7b9654fdc8bd became leader
	I1105 17:42:10.775430       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-335216_1c9aa4ff-2c2c-4b78-b565-7b9654fdc8bd!
	I1105 17:42:10.875776       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-335216_1c9aa4ff-2c2c-4b78-b565-7b9654fdc8bd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-335216 -n addons-335216
helpers_test.go:261: (dbg) Run:  kubectl --context addons-335216 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-kgs7g task-pv-pod-restore test-local-path
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-335216 describe pod hello-world-app-55bf9c44b4-kgs7g task-pv-pod-restore test-local-path
helpers_test.go:282: (dbg) kubectl --context addons-335216 describe pod hello-world-app-55bf9c44b4-kgs7g task-pv-pod-restore test-local-path:

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-kgs7g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-335216/192.168.49.2
	Start Time:       Tue, 05 Nov 2024 17:46:23 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:           10.244.0.31
	Controlled By:  ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xg682 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xg682:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  4m25s                default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-kgs7g to addons-335216
	  Warning  Failed     73s (x2 over 3m16s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     73s (x2 over 3m16s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    60s (x2 over 3m16s)  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     60s (x2 over 3m16s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    49s (x3 over 4m24s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-335216/192.168.49.2
	Start Time:       Tue, 05 Nov 2024 17:44:45 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82jtj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-82jtj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-335216
	  Warning  Failed     5m25s                kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    97s (x5 over 5m24s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     97s (x5 over 5m24s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    85s (x4 over 6m3s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     12s (x4 over 5m25s)  kubelet            Error: ErrImagePull
	  Warning  Failed     12s (x3 over 4m17s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:367678a80c0be120f67f3adfccc2f408bd2c1319ed98c1975ac88e750d0efe26 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-335216/192.168.49.2
	Start Time:       Tue, 05 Nov 2024 17:44:18 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cd45k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-cd45k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m30s                 default-scheduler  Successfully assigned default/test-local-path to addons-335216
	  Warning  Failed     5m56s                 kubelet            Failed to pull image "busybox:stable": determining manifest MIME type for docker://busybox:stable: reading manifest sha256:9186e638ccc30c5d1a2efd5a2cd632f49bb5013f164f6f85c48ed6fce90fe38f in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m4s (x4 over 6m27s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     104s (x4 over 5m56s)  kubelet            Error: ErrImagePull
	  Warning  Failed     104s (x3 over 4m54s)  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    69s (x7 over 5m55s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     69s (x7 over 5m55s)   kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
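Every failure in the post-mortem above is the same root cause: Docker Hub's anonymous pull rate limit (`toomanyrequests`). The event cadence — `BackOff ... (x5 over 5m24s)`, `Pulling ... (x4 over 6m3s)` — matches the kubelet's exponential image-pull backoff. The sketch below reproduces that retry schedule; the 10s initial delay and 5m cap are the kubelet's documented defaults, assumed here rather than read from this cluster's config:

```go
package main

import "fmt"

// backoffSchedule models the kubelet's image-pull retry backoff: the delay
// between successive pull attempts starts at initialSec, doubles each time,
// and is capped at capSec. Defaults of 10s/300s are assumptions, not values
// taken from this test run.
func backoffSchedule(initialSec, capSec, attempts int) []int {
	delays := make([]int, 0, attempts)
	d := initialSec
	for i := 0; i < attempts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > capSec {
			d = capSec
		}
	}
	return delays
}

func main() {
	// Delay in seconds before each retry; the cap explains why later
	// "Pulling" events arrive roughly five minutes apart.
	fmt.Println(backoffSchedule(10, 300, 6)) // [10 20 40 80 160 300]
}
```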
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-335216 addons disable volumesnapshots --alsologtostderr -v=1
panic: test timed out after 2h0m0s
	running tests:
		TestAddons (1h59m44s)
		TestAddons/parallel/CSI (1h56m58s)

goroutine 1218 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 119 minutes]:
testing.(*T).Run(0xc00015a340, {0x2c4f931?, 0x0?}, 0x35f58b8)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
testing.runTests.func1(0xc00015a340)
	/usr/local/go/src/testing/testing.go:2168 +0x37
testing.tRunner(0xc00015a340, 0xc000a85bc8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
testing.runTests(0xc0001381f8, {0x52c36c0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x4113b0?, 0x52e9e20?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000551720)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000551720)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000523780)
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/home/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 113 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 112
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:280 +0xbb

goroutine 103 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc00015a000)
	/usr/local/go/src/testing/testing.go:1484 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc00015a000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestOffline(0xc00015a000)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc00015a000, 0x35f59c8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 111 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000778810, 0x17)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000a81d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3987b80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000778840)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000740040, {0x3933440, 0xc0009ac5d0}, 0x1, 0xc000112310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000740040, 0x3b9aca00, 0x0, 0x1, 0xc000112310)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 172
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:143 +0x1cf

goroutine 194 [select, 117 minutes]:
net/http.(*persistConn).readLoop(0xc00160a5a0)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 142
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 204 [syscall, 110 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xf, 0xc000a876e8, 0x4, 0xc00144a510, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
os.(*Process).pidfdWait(0xc001d64210?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x236
os.(*Process).wait(0x30?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0004bfe00)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0004bfe00)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014529c0, 0xc0004bfe00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.disableAddon(0xc0014529c0, {0x2c5afb6, 0xf}, {0xc00062e1f0?, 0xc00077a8c0?})
	/home/jenkins/workspace/Build_Cross/test/integration/addons_test.go:992 +0x12d
runtime.Goexit()
	/usr/local/go/src/runtime/panic.go:629 +0x5e
testing.(*common).FailNow(0xc0014529c0)
	/usr/local/go/src/testing/testing.go:1006 +0x4a
testing.(*common).Fatalf(0xc0014529c0, {0x2cc2e03?, 0xc0005b6230?}, {0xc001619d90?, 0xc00062e1f0?, 0xd?})
	/usr/local/go/src/testing/testing.go:1090 +0x5e
k8s.io/minikube/test/integration.validateCSIDriverAndSnapshots({0x396bfd8, 0xc0005b6230}, 0xc0014529c0, {0xc00062e1f0, 0xd})
	/home/jenkins/workspace/Build_Cross/test/integration/addons_test.go:549 +0x1645
k8s.io/minikube/test/integration.TestAddons.func4.1(0xc0014529c0)
	/home/jenkins/workspace/Build_Cross/test/integration/addons_test.go:163 +0x6c
testing.tRunner(0xc0014529c0, 0xc00098b000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 198
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 198 [chan receive, 111 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc001452000, 0xc001352db0)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 104
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 172 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000778840, 0xc000112310)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 148
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cache.go:122 +0x569

goroutine 196 [select, 117 minutes]:
net/http.(*persistConn).readLoop(0xc001398ea0)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 158
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

goroutine 104 [chan receive, 117 minutes]:
testing.(*T).Run(0xc00015a680, {0x2c4b864?, 0x22ecb25c000?}, 0xc001352db0)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestAddons(0xc00015a680)
	/home/jenkins/workspace/Build_Cross/test/integration/addons_test.go:140 +0x2f4
testing.tRunner(0xc00015a680, 0x35f58b8)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

goroutine 112 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x396c2f0, 0xc000112310}, 0xc001483f50, 0xc001483f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x396c2f0, 0xc000112310}, 0xd8?, 0xc001483f50, 0xc001483f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x396c2f0?, 0xc000112310?}, 0xc0008f04e0?, 0x559a40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000488fd0?, 0x593f04?, 0xc001402150?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 172
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/transport/cert_rotation.go:145 +0x27a

goroutine 197 [select, 117 minutes]:
net/http.(*persistConn).writeLoop(0xc001398ea0)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 158
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 171 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x39625a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 148
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.2/util/workqueue/delaying_queue.go:141 +0x238

goroutine 195 [select, 117 minutes]:
net/http.(*persistConn).writeLoop(0xc00160a5a0)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 142
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

goroutine 1197 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0x7ff0ca146f60, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0014ae600?, 0xc0004f7e00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014ae600, {0xc0004f7e00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0009480e0, {0xc0004f7e00?, 0xc00004d520?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001886540, {0x3931920, 0xc000b0c040})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3931aa0, 0xc001886540}, {0x3931920, 0xc000b0c040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0009480e0?, {0x3931aa0, 0xc001886540})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0009480e0, {0x3931aa0, 0xc001886540})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3931aa0, 0xc001886540}, {0x39319a0, 0xc0009480e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0000871f0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 204
	/usr/local/go/src/os/exec/exec.go:732 +0x98b

goroutine 1198 [IO wait, 110 minutes]:
internal/poll.runtime_pollWait(0x7ff0ca146618, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0014ae6c0?, 0xc001827c25?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014ae6c0, {0xc001827c25, 0x83db, 0x83db})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0009480f8, {0xc001827c25?, 0xc00004d520?, 0x10000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001886570, {0x3931920, 0xc001b44008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3931aa0, 0xc001886570}, {0x3931920, 0xc001b44008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0009480f8?, {0x3931aa0, 0xc001886570})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0009480f8, {0x3931aa0, 0xc001886570})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3931aa0, 0xc001886570}, {0x39319a0, 0xc0009480f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001e92b80?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 204
	/usr/local/go/src/os/exec/exec.go:732 +0x98b


Test pass (12/19)

TestDownloadOnly/v1.20.0/json-events (9.19s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-070752 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-070752 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.18568388s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.19s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1105 17:41:00.809275  378976 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1105 17:41:00.809366  378976 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-372139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-070752
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-070752: exit status 85 (66.777373ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-070752 | jenkins | v1.34.0 | 05 Nov 24 17:40 UTC |          |
	|         | -p download-only-070752        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:40:51
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:40:51.667839  378988 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:40:51.667960  378988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:40:51.667969  378988 out.go:358] Setting ErrFile to fd 2...
	I1105 17:40:51.667973  378988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:40:51.668141  378988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-372139/.minikube/bin
	W1105 17:40:51.668292  378988 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19910-372139/.minikube/config/config.json: open /home/jenkins/minikube-integration/19910-372139/.minikube/config/config.json: no such file or directory
	I1105 17:40:51.668862  378988 out.go:352] Setting JSON to true
	I1105 17:40:51.669854  378988 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5003,"bootTime":1730823449,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 17:40:51.669968  378988 start.go:139] virtualization: kvm guest
	I1105 17:40:51.672422  378988 out.go:97] [download-only-070752] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1105 17:40:51.672555  378988 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19910-372139/.minikube/cache/preloaded-tarball: no such file or directory
	I1105 17:40:51.672622  378988 notify.go:220] Checking for updates...
	I1105 17:40:51.674035  378988 out.go:169] MINIKUBE_LOCATION=19910
	I1105 17:40:51.675390  378988 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:40:51.676741  378988 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19910-372139/kubeconfig
	I1105 17:40:51.678048  378988 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-372139/.minikube
	I1105 17:40:51.679533  378988 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1105 17:40:51.681981  378988 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1105 17:40:51.682246  378988 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:40:51.704394  378988 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 17:40:51.704495  378988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:40:51.749410  378988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-11-05 17:40:51.740001536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1105 17:40:51.749550  378988 docker.go:318] overlay module found
	I1105 17:40:51.751300  378988 out.go:97] Using the docker driver based on user configuration
	I1105 17:40:51.751343  378988 start.go:297] selected driver: docker
	I1105 17:40:51.751353  378988 start.go:901] validating driver "docker" against <nil>
	I1105 17:40:51.751465  378988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:40:51.796035  378988 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-11-05 17:40:51.786813617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1105 17:40:51.796207  378988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:40:51.796723  378988 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1105 17:40:51.796863  378988 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 17:40:51.798679  378988 out.go:169] Using Docker driver with root privileges
	I1105 17:40:51.799921  378988 cni.go:84] Creating CNI manager for ""
	I1105 17:40:51.799979  378988 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1105 17:40:51.799993  378988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1105 17:40:51.800064  378988 start.go:340] cluster config:
	{Name:download-only-070752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-070752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1105 17:40:51.801460  378988 out.go:97] Starting "download-only-070752" primary control-plane node in "download-only-070752" cluster
	I1105 17:40:51.801482  378988 cache.go:121] Beginning downloading kic base image for docker with crio
	I1105 17:40:51.802603  378988 out.go:97] Pulling base image v0.0.45-1730282848-19883 ...
	I1105 17:40:51.802623  378988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 17:40:51.802739  378988 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local docker daemon
	I1105 17:40:51.818403  378988 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 to local cache
	I1105 17:40:51.818585  378988 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 in local cache directory
	I1105 17:40:51.818679  378988 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 to local cache
	I1105 17:40:51.830117  378988 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 17:40:51.830141  378988 cache.go:56] Caching tarball of preloaded images
	I1105 17:40:51.830290  378988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1105 17:40:51.832083  378988 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1105 17:40:51.832101  378988 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1105 17:40:51.863429  378988 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19910-372139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1105 17:40:55.397862  378988 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730282848-19883@sha256:e762c909ad2a507083ec25b1ad3091c71fc7d92824e4a659c9158bbfe5ae03d4 as a tarball
	
	
	* The control-plane node download-only-070752 host does not exist
	  To start a cluster, run: "minikube start -p download-only-070752"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-070752
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.2/json-events (4.27s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-989945 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-989945 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.270343711s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (4.27s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1105 17:41:05.486802  378976 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1105 17:41:05.486855  378976 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19910-372139/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-989945
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-989945: exit status 85 (65.830679ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-070752 | jenkins | v1.34.0 | 05 Nov 24 17:40 UTC |                     |
	|         | -p download-only-070752        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| delete  | -p download-only-070752        | download-only-070752 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC | 05 Nov 24 17:41 UTC |
	| start   | -o=json --download-only        | download-only-989945 | jenkins | v1.34.0 | 05 Nov 24 17:41 UTC |                     |
	|         | -p download-only-989945        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/11/05 17:41:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1105 17:41:01.260866  379346 out.go:345] Setting OutFile to fd 1 ...
	I1105 17:41:01.260981  379346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:01.261005  379346 out.go:358] Setting ErrFile to fd 2...
	I1105 17:41:01.261013  379346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1105 17:41:01.261241  379346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19910-372139/.minikube/bin
	I1105 17:41:01.261820  379346 out.go:352] Setting JSON to true
	I1105 17:41:01.262736  379346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5012,"bootTime":1730823449,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1105 17:41:01.262801  379346 start.go:139] virtualization: kvm guest
	I1105 17:41:01.264797  379346 out.go:97] [download-only-989945] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1105 17:41:01.264949  379346 notify.go:220] Checking for updates...
	I1105 17:41:01.266282  379346 out.go:169] MINIKUBE_LOCATION=19910
	I1105 17:41:01.267457  379346 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1105 17:41:01.268513  379346 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19910-372139/kubeconfig
	I1105 17:41:01.269607  379346 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19910-372139/.minikube
	I1105 17:41:01.270874  379346 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1105 17:41:01.273232  379346 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1105 17:41:01.273454  379346 driver.go:394] Setting default libvirt URI to qemu:///system
	I1105 17:41:01.297063  379346 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1105 17:41:01.297139  379346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:41:01.343729  379346 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-11-05 17:41:01.333815475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1105 17:41:01.343850  379346 docker.go:318] overlay module found
	I1105 17:41:01.345691  379346 out.go:97] Using the docker driver based on user configuration
	I1105 17:41:01.345726  379346 start.go:297] selected driver: docker
	I1105 17:41:01.345734  379346 start.go:901] validating driver "docker" against <nil>
	I1105 17:41:01.345856  379346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1105 17:41:01.392929  379346 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-11-05 17:41:01.384116342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: br
idge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1105 17:41:01.393111  379346 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1105 17:41:01.393663  379346 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1105 17:41:01.393811  379346 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1105 17:41:01.395580  379346 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-989945 host does not exist
	  To start a cluster, run: "minikube start -p download-only-989945"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-989945
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.11s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-194835 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-194835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-194835
--- PASS: TestDownloadOnlyKic (1.11s)

TestBinaryMirror (0.78s)

=== RUN   TestBinaryMirror
I1105 17:41:07.315207  378976 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-699458 --alsologtostderr --binary-mirror http://127.0.0.1:41791 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-699458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-699458
--- PASS: TestBinaryMirror (0.78s)


Test skip (6/19)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)
