Test Report: Docker_Linux_containerd 20539

                    
404431ee24582bacb75d7cfbedbe3aa3f9ffc1a2:2025-03-17:38754

Failed tests (14/330)

TestAddons/parallel/Ingress (491.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-012219 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-012219 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-012219 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a2751ab5-cd1c-44a3-a6ba-dba98b254a96] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:250: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-012219 -n addons-012219
addons_test.go:250: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-03-17 12:51:42.019047253 +0000 UTC m=+806.270896887
addons_test.go:250: (dbg) Run:  kubectl --context addons-012219 describe po nginx -n default
addons_test.go:250: (dbg) kubectl --context addons-012219 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-012219/192.168.49.2
Start Time:       Mon, 17 Mar 2025 12:43:41 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.34
IPs:
IP:  10.244.0.34
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hh4v9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-hh4v9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-012219
Normal   Pulling    5m (x5 over 8m)         kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     4m57s (x5 over 7m57s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     4m57s (x5 over 7m57s)   kubelet            Error: ErrImagePull
Warning  Failed     2m57s (x19 over 7m56s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    2m29s (x21 over 7m56s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
addons_test.go:250: (dbg) Run:  kubectl --context addons-012219 logs nginx -n default
addons_test.go:250: (dbg) Non-zero exit: kubectl --context addons-012219 logs nginx -n default: exit status 1 (78.132039ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:250: kubectl --context addons-012219 logs nginx -n default: exit status 1
addons_test.go:251: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
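The failure is an image-pull problem, not an ingress bug: the pod events above show Docker Hub answering the unauthenticated pull of docker.io/nginx:alpine with 429 Too Many Requests (toomanyrequests). As a sketch of two common mitigations (not part of this run; the profile/context name is taken from the log, and the credentials are placeholders):

	# Pre-load the image into the minikube node so the kubelet never pulls from Docker Hub:
	docker pull docker.io/nginx:alpine
	minikube -p addons-012219 image load docker.io/nginx:alpine

	# Or authenticate the pull by creating a registry secret and referencing it as an
	# imagePullSecret in the pod spec:
	kubectl --context addons-012219 create secret docker-registry regcred \
		--docker-server=https://index.docker.io/v1/ \
		--docker-username=<user> --docker-password=<access-token>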
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-012219
helpers_test.go:235: (dbg) docker inspect addons-012219:

-- stdout --
	[
	    {
	        "Id": "8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf",
	        "Created": "2025-03-17T12:39:11.6117619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T12:39:11.648441729Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf/hosts",
	        "LogPath": "/var/lib/docker/containers/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf-json.log",
	        "Name": "/addons-012219",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-012219:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-012219",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf",
	                "LowerDir": "/var/lib/docker/overlay2/a74098a93f2345c9e4264de07f8a2e26b053757299012a821a0e2ec221e9ec58-init/diff:/var/lib/docker/overlay2/0d1b72eeaeef000e911d7896b151fb0d0a984c18eeb180d19223ea8ba67fdac6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a74098a93f2345c9e4264de07f8a2e26b053757299012a821a0e2ec221e9ec58/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a74098a93f2345c9e4264de07f8a2e26b053757299012a821a0e2ec221e9ec58/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a74098a93f2345c9e4264de07f8a2e26b053757299012a821a0e2ec221e9ec58/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-012219",
	                "Source": "/var/lib/docker/volumes/addons-012219/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-012219",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-012219",
	                "name.minikube.sigs.k8s.io": "addons-012219",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "452ad4fa346bbc598add717085bed619e052b873df7af970d51fdbc4e83feeb5",
	            "SandboxKey": "/var/run/docker/netns/452ad4fa346b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-012219": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:86:92:3e:af:06",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d3969b4da548f201032412c0cc3078db46294c18bc50d2dd5fac1526b374ada7",
	                    "EndpointID": "0f281088beec74782f3e18095976832a627c3e17e266ddc1de94d77add036cd0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-012219",
	                        "8197043953b2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
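The NetworkSettings.Ports map in the inspect output above is how the test harness reaches the kic container: every exposed port is published on an ephemeral 127.0.0.1 host port, with 8443 (the Kubernetes API server) mapped to 33148 in this run. A single mapping can also be read without parsing the JSON; this uses only the standard docker CLI and the container name from the log:

	docker port addons-012219 8443/tcp
	# prints: 127.0.0.1:33148 (value taken from the inspect output above)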
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-012219 -n addons-012219
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 logs -n 25: (1.251152135s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-498596              | download-only-498596   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| delete  | -p download-only-960465              | download-only-960465   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| delete  | -p download-only-498596              | download-only-498596   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| start   | --download-only -p                   | download-docker-513231 | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | download-docker-513231               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-513231            | download-docker-513231 | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| start   | --download-only -p                   | binary-mirror-312807   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | binary-mirror-312807                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45577               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-312807              | binary-mirror-312807   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| addons  | disable dashboard -p                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | addons-012219                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | addons-012219                        |                        |         |         |                     |                     |
	| start   | -p addons-012219 --wait=true         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:42 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:42 UTC | 17 Mar 25 12:42 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | -p addons-012219                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-012219 ip                     | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | disable cloud-spanner                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:46 UTC | 17 Mar 25 12:47 UTC |
	|         | storage-provisioner-rancher          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
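Reassembled from the Command and Args columns of the Audit table above (the line breaks inside the table cells are presentation only), the cluster under test was created with a single start invocation along these lines:

	out/minikube-linux-amd64 start -p addons-012219 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher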
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:38:48
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:38:48.035665  455052 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:38:48.036294  455052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:38:48.036342  455052 out.go:358] Setting ErrFile to fd 2...
	I0317 12:38:48.036350  455052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:38:48.036801  455052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 12:38:48.037791  455052 out.go:352] Setting JSON to false
	I0317 12:38:48.038760  455052 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8468,"bootTime":1742206660,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:38:48.038904  455052 start.go:139] virtualization: kvm guest
	I0317 12:38:48.040562  455052 out.go:177] * [addons-012219] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:38:48.041822  455052 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 12:38:48.041817  455052 notify.go:220] Checking for updates...
	I0317 12:38:48.043176  455052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:38:48.044454  455052 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:38:48.045722  455052 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 12:38:48.046826  455052 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 12:38:48.048090  455052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:38:48.049578  455052 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:38:48.074854  455052 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 12:38:48.074957  455052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:38:48.127449  455052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-17 12:38:48.118130941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:38:48.127557  455052 docker.go:318] overlay module found
	I0317 12:38:48.129183  455052 out.go:177] * Using the docker driver based on user configuration
	I0317 12:38:48.130332  455052 start.go:297] selected driver: docker
	I0317 12:38:48.130353  455052 start.go:901] validating driver "docker" against <nil>
	I0317 12:38:48.130368  455052 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:38:48.131173  455052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:38:48.182534  455052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-17 12:38:48.173184645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:38:48.182748  455052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:38:48.182959  455052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:38:48.184638  455052 out.go:177] * Using Docker driver with root privileges
	I0317 12:38:48.185738  455052 cni.go:84] Creating CNI manager for ""
	I0317 12:38:48.185832  455052 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:38:48.185848  455052 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 12:38:48.185934  455052 start.go:340] cluster config:
	{Name:addons-012219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:38:48.187222  455052 out.go:177] * Starting "addons-012219" primary control-plane node in "addons-012219" cluster
	I0317 12:38:48.188445  455052 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 12:38:48.189727  455052 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 12:38:48.190812  455052 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:38:48.190861  455052 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 12:38:48.190875  455052 cache.go:56] Caching tarball of preloaded images
	I0317 12:38:48.190891  455052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 12:38:48.191017  455052 preload.go:172] Found /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 12:38:48.191033  455052 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 12:38:48.191471  455052 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/config.json ...
	I0317 12:38:48.191502  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/config.json: {Name:mk5ae75b173bff0b4f3b12df1725ab9cf5ff3206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:38:48.208531  455052 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 12:38:48.208738  455052 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0317 12:38:48.208767  455052 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0317 12:38:48.208775  455052 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0317 12:38:48.208790  455052 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0317 12:38:48.208801  455052 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 from local cache
	I0317 12:39:01.030051  455052 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 from cached tarball
	I0317 12:39:01.030110  455052 cache.go:230] Successfully downloaded all kic artifacts
	I0317 12:39:01.030185  455052 start.go:360] acquireMachinesLock for addons-012219: {Name:mk4f9029816aabb75cfe9bdbdbb316adafd6cfa3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:39:01.030314  455052 start.go:364] duration metric: took 105.313µs to acquireMachinesLock for "addons-012219"
	I0317 12:39:01.030358  455052 start.go:93] Provisioning new machine with config: &{Name:addons-012219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 12:39:01.030431  455052 start.go:125] createHost starting for "" (driver="docker")
	I0317 12:39:01.032554  455052 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0317 12:39:01.032866  455052 start.go:159] libmachine.API.Create for "addons-012219" (driver="docker")
	I0317 12:39:01.032909  455052 client.go:168] LocalClient.Create starting
	I0317 12:39:01.033127  455052 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem
	I0317 12:39:01.250466  455052 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem
	I0317 12:39:01.750594  455052 cli_runner.go:164] Run: docker network inspect addons-012219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 12:39:01.770703  455052 cli_runner.go:211] docker network inspect addons-012219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 12:39:01.770792  455052 network_create.go:284] running [docker network inspect addons-012219] to gather additional debugging logs...
	I0317 12:39:01.770810  455052 cli_runner.go:164] Run: docker network inspect addons-012219
	W0317 12:39:01.791389  455052 cli_runner.go:211] docker network inspect addons-012219 returned with exit code 1
	I0317 12:39:01.791428  455052 network_create.go:287] error running [docker network inspect addons-012219]: docker network inspect addons-012219: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-012219 not found
	I0317 12:39:01.791459  455052 network_create.go:289] output of [docker network inspect addons-012219]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-012219 not found
	
	** /stderr **
	I0317 12:39:01.791608  455052 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 12:39:01.812027  455052 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020911c0}
	I0317 12:39:01.812090  455052 network_create.go:124] attempt to create docker network addons-012219 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0317 12:39:01.812146  455052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-012219 addons-012219
	I0317 12:39:01.870671  455052 network_create.go:108] docker network addons-012219 192.168.49.0/24 created
	I0317 12:39:01.870727  455052 kic.go:121] calculated static IP "192.168.49.2" for the "addons-012219" container
	I0317 12:39:01.870809  455052 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 12:39:01.888169  455052 cli_runner.go:164] Run: docker volume create addons-012219 --label name.minikube.sigs.k8s.io=addons-012219 --label created_by.minikube.sigs.k8s.io=true
	I0317 12:39:01.907968  455052 oci.go:103] Successfully created a docker volume addons-012219
	I0317 12:39:01.908179  455052 cli_runner.go:164] Run: docker run --rm --name addons-012219-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-012219 --entrypoint /usr/bin/test -v addons-012219:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 12:39:06.872289  455052 cli_runner.go:217] Completed: docker run --rm --name addons-012219-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-012219 --entrypoint /usr/bin/test -v addons-012219:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib: (4.964041866s)
	I0317 12:39:06.872360  455052 oci.go:107] Successfully prepared a docker volume addons-012219
	I0317 12:39:06.872407  455052 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:39:06.872435  455052 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 12:39:06.872519  455052 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-012219:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 12:39:11.538297  455052 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-012219:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.665711957s)
	I0317 12:39:11.538334  455052 kic.go:203] duration metric: took 4.665893918s to extract preloaded images to volume ...
	W0317 12:39:11.538500  455052 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 12:39:11.538611  455052 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 12:39:11.593645  455052 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-012219 --name addons-012219 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-012219 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-012219 --network addons-012219 --ip 192.168.49.2 --volume addons-012219:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 12:39:11.877382  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Running}}
	I0317 12:39:11.896804  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:11.916694  455052 cli_runner.go:164] Run: docker exec addons-012219 stat /var/lib/dpkg/alternatives/iptables
	I0317 12:39:11.961992  455052 oci.go:144] the created container "addons-012219" has a running status.
	I0317 12:39:11.962040  455052 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa...
	I0317 12:39:12.496926  455052 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 12:39:12.520345  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:12.539423  455052 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 12:39:12.539446  455052 kic_runner.go:114] Args: [docker exec --privileged addons-012219 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 12:39:12.591629  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:12.613026  455052 machine.go:93] provisionDockerMachine start ...
	I0317 12:39:12.613173  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:12.632700  455052 main.go:141] libmachine: Using SSH client type: native
	I0317 12:39:12.632985  455052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0317 12:39:12.633003  455052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 12:39:12.768094  455052 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-012219
	
	I0317 12:39:12.768130  455052 ubuntu.go:169] provisioning hostname "addons-012219"
	I0317 12:39:12.768210  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:12.787598  455052 main.go:141] libmachine: Using SSH client type: native
	I0317 12:39:12.787821  455052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0317 12:39:12.787838  455052 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-012219 && echo "addons-012219" | sudo tee /etc/hostname
	I0317 12:39:12.936726  455052 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-012219
	
	I0317 12:39:12.936809  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:12.954937  455052 main.go:141] libmachine: Using SSH client type: native
	I0317 12:39:12.955163  455052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0317 12:39:12.955181  455052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-012219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-012219/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-012219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 12:39:13.093093  455052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:39:13.093132  455052 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20539-446828/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-446828/.minikube}
	I0317 12:39:13.093178  455052 ubuntu.go:177] setting up certificates
	I0317 12:39:13.093191  455052 provision.go:84] configureAuth start
	I0317 12:39:13.093250  455052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-012219
	I0317 12:39:13.111440  455052 provision.go:143] copyHostCerts
	I0317 12:39:13.111541  455052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/key.pem (1675 bytes)
	I0317 12:39:13.111698  455052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/ca.pem (1082 bytes)
	I0317 12:39:13.111825  455052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/cert.pem (1123 bytes)
	I0317 12:39:13.111941  455052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem org=jenkins.addons-012219 san=[127.0.0.1 192.168.49.2 addons-012219 localhost minikube]
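	
	The server certificate above is produced in-process by minikube's provisioner. For reference, an equivalent certificate can be minted by hand with openssl; this is a sketch only (the SAN list, org, and CA file names come from the log line above, while the openssl invocation itself is an assumption, not minikube's actual code path):
	
		# Sketch: sign a server cert against the minikube CA, covering the SANs logged above
		openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
		  -subj "/O=jenkins.addons-012219"
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
		  -out server.pem -days 365 \
		  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-012219,DNS:localhost,DNS:minikube')
	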
	I0317 12:39:13.162824  455052 provision.go:177] copyRemoteCerts
	I0317 12:39:13.162892  455052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 12:39:13.162936  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.181586  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.281817  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 12:39:13.308715  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 12:39:13.335335  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 12:39:13.361942  455052 provision.go:87] duration metric: took 268.734518ms to configureAuth
	I0317 12:39:13.361975  455052 ubuntu.go:193] setting minikube options for container-runtime
	I0317 12:39:13.362170  455052 config.go:182] Loaded profile config "addons-012219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:39:13.362187  455052 machine.go:96] duration metric: took 749.138253ms to provisionDockerMachine
	I0317 12:39:13.362195  455052 client.go:171] duration metric: took 12.329276946s to LocalClient.Create
	I0317 12:39:13.362217  455052 start.go:167] duration metric: took 12.329355429s to libmachine.API.Create "addons-012219"
	I0317 12:39:13.362224  455052 start.go:293] postStartSetup for "addons-012219" (driver="docker")
	I0317 12:39:13.362233  455052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 12:39:13.362278  455052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 12:39:13.362314  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.381057  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.482200  455052 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 12:39:13.485959  455052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 12:39:13.485992  455052 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 12:39:13.486003  455052 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 12:39:13.486012  455052 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 12:39:13.486025  455052 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-446828/.minikube/addons for local assets ...
	I0317 12:39:13.486108  455052 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-446828/.minikube/files for local assets ...
	I0317 12:39:13.486140  455052 start.go:296] duration metric: took 123.908916ms for postStartSetup
	I0317 12:39:13.486452  455052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-012219
	I0317 12:39:13.505708  455052 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/config.json ...
	I0317 12:39:13.506012  455052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:39:13.506061  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.524830  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.618002  455052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 12:39:13.622926  455052 start.go:128] duration metric: took 12.592476216s to createHost
	I0317 12:39:13.622956  455052 start.go:83] releasing machines lock for "addons-012219", held for 12.59262781s
	I0317 12:39:13.623035  455052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-012219
	I0317 12:39:13.642865  455052 ssh_runner.go:195] Run: cat /version.json
	I0317 12:39:13.642925  455052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 12:39:13.643002  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.642931  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.663466  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.663820  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.756498  455052 ssh_runner.go:195] Run: systemctl --version
	I0317 12:39:13.835206  455052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 12:39:13.840449  455052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 12:39:13.867139  455052 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 12:39:13.867227  455052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 12:39:13.896974  455052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
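	
	The two find commands above patch any loopback CNI config (injecting a "name" field and pinning cniVersion to 1.0.0) and then rename bridge/podman configs out of the way. A sketch of what a patched loopback file ends up looking like (the file name is hypothetical; the fields follow from the sed expressions above):
	
		$ cat /etc/cni/net.d/200-loopback.conf
		{
		  "cniVersion": "1.0.0",
		  "name": "loopback",
		  "type": "loopback"
		}
	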
	I0317 12:39:13.897004  455052 start.go:495] detecting cgroup driver to use...
	I0317 12:39:13.897060  455052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 12:39:13.897129  455052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 12:39:13.909957  455052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:39:13.921484  455052 docker.go:217] disabling cri-docker service (if available) ...
	I0317 12:39:13.921564  455052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 12:39:13.935505  455052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 12:39:13.950483  455052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 12:39:14.027108  455052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 12:39:14.108664  455052 docker.go:233] disabling docker service ...
	I0317 12:39:14.108739  455052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 12:39:14.128684  455052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 12:39:14.140295  455052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 12:39:14.223859  455052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 12:39:14.311339  455052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 12:39:14.323210  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:39:14.340391  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 12:39:14.351125  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 12:39:14.362117  455052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 12:39:14.362181  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 12:39:14.372457  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:39:14.383021  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 12:39:14.393284  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:39:14.404023  455052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 12:39:14.414135  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 12:39:14.424920  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 12:39:14.435840  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
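	
	The sed chain above rewrites /etc/containerd/config.toml in place. A sketch of the settings it converges on, spot-checked with grep (key names are taken from the commands above; the indentation and table layout assume containerd 1.7's stock config):
	
		# Sketch: spot-check the patched containerd config
		$ sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
		    sandbox_image = "registry.k8s.io/pause:3.10"
		    enable_unprivileged_ports = true
		      conf_dir = "/etc/cni/net.d"
		            SystemdCgroup = false
	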
	I0317 12:39:14.447169  455052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 12:39:14.455892  455052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 12:39:14.464970  455052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:39:14.538457  455052 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0317 12:39:14.648203  455052 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 12:39:14.648284  455052 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
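	
	The 60-second socket wait above reduces to a simple poll; a sketch (the timeout is taken from the log, the rest is generic shell):
	
		# Sketch: wait up to 60s for the containerd socket, then stat it as above
		for _ in $(seq 1 60); do
		  [ -S /run/containerd/containerd.sock ] && break
		  sleep 1
		done
		stat /run/containerd/containerd.sock
	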
	I0317 12:39:14.652351  455052 start.go:563] Will wait 60s for crictl version
	I0317 12:39:14.652423  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:39:14.655987  455052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 12:39:14.692655  455052 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 12:39:14.692751  455052 ssh_runner.go:195] Run: containerd --version
	I0317 12:39:14.719458  455052 ssh_runner.go:195] Run: containerd --version
	I0317 12:39:14.747914  455052 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 12:39:14.749467  455052 cli_runner.go:164] Run: docker network inspect addons-012219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
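	
	The Go template in the command above flattens the Docker network into a single JSON object. An illustrative output shape (values are examples consistent with the 192.168.49.x addresses elsewhere in this log; the driver and MTU values are assumptions, and the trailing comma inside ContainerIPs is what the range template literally emits):
	
		{"Name": "addons-012219","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 0, "ContainerIPs": ["192.168.49.2/24",]}
	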
	I0317 12:39:14.768502  455052 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0317 12:39:14.772651  455052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
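	
	The one-liner above (and its twin for control-plane.minikube.internal later in this log) updates /etc/hosts idempotently: strip any old entry, append the fresh one, then install the result with sudo cp, since a plain `sudo ... > /etc/hosts` would have the redirection performed by the unprivileged shell and fail. Its generic form, with NAME and IP as placeholders:
	
		# Generic form of the idempotent /etc/hosts update used above
		{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts
	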
	I0317 12:39:14.784531  455052 kubeadm.go:883] updating cluster {Name:addons-012219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 12:39:14.784658  455052 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:39:14.784705  455052 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:39:14.821811  455052 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 12:39:14.821838  455052 containerd.go:534] Images already preloaded, skipping extraction
	I0317 12:39:14.821903  455052 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:39:14.856662  455052 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 12:39:14.856689  455052 cache_images.go:84] Images are preloaded, skipping loading
	I0317 12:39:14.856698  455052 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 containerd true true} ...
	I0317 12:39:14.856794  455052 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-012219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 12:39:14.856848  455052 ssh_runner.go:195] Run: sudo crictl info
	I0317 12:39:14.892646  455052 cni.go:84] Creating CNI manager for ""
	I0317 12:39:14.892679  455052 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:39:14.892696  455052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 12:39:14.892720  455052 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-012219 NodeName:addons-012219 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 12:39:14.892840  455052 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-012219"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 12:39:14.892907  455052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 12:39:14.902144  455052 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 12:39:14.902217  455052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 12:39:14.911539  455052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0317 12:39:14.931119  455052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 12:39:14.949717  455052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
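	
	With the config rendered to /var/tmp/minikube/kubeadm.yaml.new above, it can be sanity-checked before the real init; kubeadm ships a --dry-run flag for exactly this (a sketch, reusing the binary path seen throughout this log):
	
		# Sketch: validate the generated kubeadm config without mutating the node
		sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
		  kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	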
	I0317 12:39:14.968581  455052 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0317 12:39:14.972599  455052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:39:14.985705  455052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:39:15.067453  455052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:39:15.082243  455052 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219 for IP: 192.168.49.2
	I0317 12:39:15.082266  455052 certs.go:194] generating shared ca certs ...
	I0317 12:39:15.082283  455052 certs.go:226] acquiring lock for ca certs: {Name:mk0dd75eca163be7a048e137f4b2d32cf3ae35d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.082507  455052 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key
	I0317 12:39:15.215977  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt ...
	I0317 12:39:15.216013  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt: {Name:mk6c5810acd75cb9b3a95204aeb4923648134fec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.216200  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key ...
	I0317 12:39:15.216211  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key: {Name:mk42dd3bc2bef3996c8d9aca4b91a21a3483ce7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.216284  455052 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key
	I0317 12:39:15.438662  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.crt ...
	I0317 12:39:15.438706  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.crt: {Name:mk85c7404144a9503537fe74ab1fafce6d5efe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.438914  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key ...
	I0317 12:39:15.438928  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key: {Name:mkb6f268b434dbbb859dd2b57fc506ee093f4f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.439006  455052 certs.go:256] generating profile certs ...
	I0317 12:39:15.439068  455052 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.key
	I0317 12:39:15.439083  455052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt with IP's: []
	I0317 12:39:16.321284  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt ...
	I0317 12:39:16.321323  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: {Name:mk1eac80c5f0c5edd4268bd4c7f32a2877239abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:16.321507  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.key ...
	I0317 12:39:16.321528  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.key: {Name:mke0c492e654f28c7c87390951b63149bdb94f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:16.321599  455052 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key.e4ed0683
	I0317 12:39:16.321617  455052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt.e4ed0683 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0317 12:39:16.548474  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt.e4ed0683 ...
	I0317 12:39:16.548517  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt.e4ed0683: {Name:mkd89694ae1d0c6fe037f6c581e4c6ae7215f3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:16.548694  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key.e4ed0683 ...
	I0317 12:39:16.548708  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key.e4ed0683: {Name:mk5c4663ccf0ea3254d6e2b196b6a7b99f9d07d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:16.548785  455052 certs.go:381] copying /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt.e4ed0683 -> /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt
	I0317 12:39:16.548860  455052 certs.go:385] copying /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key.e4ed0683 -> /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key
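	
	Note the SAN list logged at 12:39:16.321617: 10.96.0.1 is the ClusterIP the kubernetes Service will receive, i.e. the first usable address of the 10.96.0.0/12 service CIDR configured above, so in-cluster clients can validate the apiserver certificate. A quick check once the cluster is up (sketch):
	
		# Sketch: confirm the kubernetes Service ClusterIP matches the first SAN above
		kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'   # prints 10.96.0.1 on this cluster
	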
	I0317 12:39:16.548905  455052 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.key
	I0317 12:39:16.548921  455052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.crt with IP's: []
	I0317 12:39:17.036293  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.crt ...
	I0317 12:39:17.036358  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.crt: {Name:mk30436cd124ef55c65e6fe2ce66a0585594f30b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:17.036538  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.key ...
	I0317 12:39:17.036552  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.key: {Name:mk6febb43cba57732591cbb93ae48f0cb1241b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:17.036737  455052 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 12:39:17.036779  455052 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem (1082 bytes)
	I0317 12:39:17.036800  455052 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem (1123 bytes)
	I0317 12:39:17.036819  455052 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem (1675 bytes)
	I0317 12:39:17.037536  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 12:39:17.063150  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 12:39:17.089188  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 12:39:17.114585  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 12:39:17.140004  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 12:39:17.165950  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 12:39:17.191067  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 12:39:17.215893  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 12:39:17.241779  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 12:39:17.267836  455052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 12:39:17.286839  455052 ssh_runner.go:195] Run: openssl version
	I0317 12:39:17.293157  455052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 12:39:17.303415  455052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:39:17.307327  455052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:39 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:39:17.307401  455052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:39:17.314186  455052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
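	
	The b5213941.0 link name above is OpenSSL's subject-hash convention: the `openssl x509 -hash` run two lines earlier prints the hash that names the symlink. A sketch tying the two steps together:
	
		# Sketch: derive the subject-hash link name used above
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	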
	I0317 12:39:17.324679  455052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 12:39:17.328279  455052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:39:17.328363  455052 kubeadm.go:392] StartCluster: {Name:addons-012219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:39:17.328462  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 12:39:17.328541  455052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 12:39:17.367264  455052 cri.go:89] found id: ""
	I0317 12:39:17.367360  455052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 12:39:17.376658  455052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 12:39:17.386258  455052 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 12:39:17.386328  455052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 12:39:17.396059  455052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 12:39:17.396096  455052 kubeadm.go:157] found existing configuration files:
	
	I0317 12:39:17.396152  455052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 12:39:17.406310  455052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 12:39:17.406366  455052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 12:39:17.415771  455052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 12:39:17.426078  455052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 12:39:17.426213  455052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 12:39:17.436888  455052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 12:39:17.449038  455052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 12:39:17.449119  455052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 12:39:17.458740  455052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 12:39:17.468085  455052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 12:39:17.468161  455052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 12:39:17.477138  455052 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 12:39:17.536415  455052 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 12:39:17.536762  455052 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 12:39:17.594526  455052 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 12:39:26.716704  455052 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 12:39:26.716762  455052 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 12:39:26.716880  455052 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 12:39:26.716982  455052 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 12:39:26.717040  455052 kubeadm.go:310] OS: Linux
	I0317 12:39:26.717098  455052 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 12:39:26.717230  455052 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 12:39:26.717289  455052 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 12:39:26.717340  455052 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 12:39:26.717425  455052 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 12:39:26.717511  455052 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 12:39:26.717568  455052 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 12:39:26.717612  455052 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 12:39:26.717656  455052 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 12:39:26.717725  455052 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 12:39:26.717806  455052 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 12:39:26.717918  455052 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 12:39:26.717970  455052 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 12:39:26.719819  455052 out.go:235]   - Generating certificates and keys ...
	I0317 12:39:26.719928  455052 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 12:39:26.719987  455052 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 12:39:26.720057  455052 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 12:39:26.720122  455052 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 12:39:26.720180  455052 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 12:39:26.720234  455052 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 12:39:26.720332  455052 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 12:39:26.720471  455052 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-012219 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0317 12:39:26.720520  455052 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 12:39:26.720631  455052 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-012219 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0317 12:39:26.720688  455052 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 12:39:26.720755  455052 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 12:39:26.720796  455052 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 12:39:26.720846  455052 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 12:39:26.720892  455052 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 12:39:26.720952  455052 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 12:39:26.721012  455052 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 12:39:26.721067  455052 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 12:39:26.721121  455052 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 12:39:26.721229  455052 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 12:39:26.721313  455052 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 12:39:26.722821  455052 out.go:235]   - Booting up control plane ...
	I0317 12:39:26.722963  455052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 12:39:26.723094  455052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 12:39:26.723204  455052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 12:39:26.723394  455052 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 12:39:26.723522  455052 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 12:39:26.723578  455052 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 12:39:26.723736  455052 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 12:39:26.723882  455052 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 12:39:26.723962  455052 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.075708ms
	I0317 12:39:26.724033  455052 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 12:39:26.724123  455052 kubeadm.go:310] [api-check] The API server is healthy after 5.001229447s
	I0317 12:39:26.724272  455052 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 12:39:26.724491  455052 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 12:39:26.724595  455052 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 12:39:26.724789  455052 kubeadm.go:310] [mark-control-plane] Marking the node addons-012219 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 12:39:26.724861  455052 kubeadm.go:310] [bootstrap-token] Using token: bcu5f8.5eu7wklvfllmqleo
	I0317 12:39:26.726327  455052 out.go:235]   - Configuring RBAC rules ...
	I0317 12:39:26.726478  455052 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 12:39:26.726566  455052 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 12:39:26.726720  455052 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 12:39:26.726833  455052 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 12:39:26.726967  455052 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 12:39:26.727121  455052 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 12:39:26.727234  455052 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 12:39:26.727301  455052 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 12:39:26.727347  455052 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 12:39:26.727358  455052 kubeadm.go:310] 
	I0317 12:39:26.727416  455052 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 12:39:26.727427  455052 kubeadm.go:310] 
	I0317 12:39:26.727494  455052 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 12:39:26.727501  455052 kubeadm.go:310] 
	I0317 12:39:26.727533  455052 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 12:39:26.727612  455052 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 12:39:26.727658  455052 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 12:39:26.727667  455052 kubeadm.go:310] 
	I0317 12:39:26.727717  455052 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 12:39:26.727722  455052 kubeadm.go:310] 
	I0317 12:39:26.727762  455052 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 12:39:26.727769  455052 kubeadm.go:310] 
	I0317 12:39:26.727833  455052 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 12:39:26.728012  455052 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 12:39:26.728122  455052 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 12:39:26.728143  455052 kubeadm.go:310] 
	I0317 12:39:26.728244  455052 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 12:39:26.728372  455052 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 12:39:26.728388  455052 kubeadm.go:310] 
	I0317 12:39:26.728496  455052 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bcu5f8.5eu7wklvfllmqleo \
	I0317 12:39:26.728637  455052 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e05049646db0098d7df87a082a7b96dd6c54c151b6030ddf1f26dcd0982d4713 \
	I0317 12:39:26.728676  455052 kubeadm.go:310] 	--control-plane 
	I0317 12:39:26.728685  455052 kubeadm.go:310] 
	I0317 12:39:26.728798  455052 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 12:39:26.728805  455052 kubeadm.go:310] 
	I0317 12:39:26.728933  455052 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bcu5f8.5eu7wklvfllmqleo \
	I0317 12:39:26.729128  455052 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e05049646db0098d7df87a082a7b96dd6c54c151b6030ddf1f26dcd0982d4713 
	I0317 12:39:26.729143  455052 cni.go:84] Creating CNI manager for ""
	I0317 12:39:26.729150  455052 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:39:26.730819  455052 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 12:39:26.732461  455052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 12:39:26.737191  455052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 12:39:26.737228  455052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 12:39:26.757414  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 12:39:26.978716  455052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 12:39:26.978832  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:26.978900  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-012219 minikube.k8s.io/updated_at=2025_03_17T12_39_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=addons-012219 minikube.k8s.io/primary=true
	I0317 12:39:26.986869  455052 ops.go:34] apiserver oom_adj: -16
	I0317 12:39:27.069756  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:27.569849  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:28.070194  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:28.570825  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:29.069844  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:29.569896  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:30.070141  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:30.570759  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
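	
	The repeated `get sa default` runs above are a poll: kubeadm creates the default ServiceAccount asynchronously, and minikube retries until it exists (the elevateKubeSystemPrivileges step timed on the next line). The loop reduces to this sketch (the retry interval is assumed):
	
		# Sketch of the poll above: wait until the default ServiceAccount exists
		until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
		    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5
		done
	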
	I0317 12:39:30.643541  455052 kubeadm.go:1113] duration metric: took 3.66480023s to wait for elevateKubeSystemPrivileges
	I0317 12:39:30.643580  455052 kubeadm.go:394] duration metric: took 13.315224606s to StartCluster
	I0317 12:39:30.643601  455052 settings.go:142] acquiring lock: {Name:mk72030e2b6f80365da0b928b8b3c5c72d9da724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:30.643729  455052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:39:30.644109  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/kubeconfig: {Name:mk0cd04f754d83d5d928c90de569ec9144a7d4e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:30.644299  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 12:39:30.644348  455052 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 12:39:30.644427  455052 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0317 12:39:30.644597  455052 config.go:182] Loaded profile config "addons-012219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:39:30.644619  455052 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-012219"
	I0317 12:39:30.644633  455052 addons.go:69] Setting yakd=true in profile "addons-012219"
	I0317 12:39:30.644638  455052 addons.go:69] Setting metrics-server=true in profile "addons-012219"
	I0317 12:39:30.644650  455052 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-012219"
	I0317 12:39:30.644666  455052 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-012219"
	I0317 12:39:30.644675  455052 addons.go:238] Setting addon metrics-server=true in "addons-012219"
	I0317 12:39:30.644693  455052 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-012219"
	I0317 12:39:30.644703  455052 addons.go:69] Setting volcano=true in profile "addons-012219"
	I0317 12:39:30.644708  455052 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-012219"
	I0317 12:39:30.644715  455052 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-012219"
	I0317 12:39:30.644718  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644723  455052 addons.go:69] Setting volumesnapshots=true in profile "addons-012219"
	I0317 12:39:30.644744  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644751  455052 addons.go:238] Setting addon volumesnapshots=true in "addons-012219"
	I0317 12:39:30.644780  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644796  455052 addons.go:69] Setting registry=true in profile "addons-012219"
	I0317 12:39:30.644813  455052 addons.go:238] Setting addon registry=true in "addons-012219"
	I0317 12:39:30.644837  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645011  455052 addons.go:69] Setting storage-provisioner=true in profile "addons-012219"
	I0317 12:39:30.645041  455052 addons.go:238] Setting addon storage-provisioner=true in "addons-012219"
	I0317 12:39:30.645067  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645121  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645291  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645343  455052 addons.go:69] Setting inspektor-gadget=true in profile "addons-012219"
	I0317 12:39:30.644780  455052 addons.go:69] Setting cloud-spanner=true in profile "addons-012219"
	I0317 12:39:30.645385  455052 addons.go:238] Setting addon inspektor-gadget=true in "addons-012219"
	I0317 12:39:30.645421  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644694  455052 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-012219"
	I0317 12:39:30.645467  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645531  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.644676  455052 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-012219"
	I0317 12:39:30.645706  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644713  455052 addons.go:238] Setting addon volcano=true in "addons-012219"
	I0317 12:39:30.645967  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645977  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645350  455052 addons.go:69] Setting ingress-dns=true in profile "addons-012219"
	I0317 12:39:30.646063  455052 addons.go:238] Setting addon ingress-dns=true in "addons-012219"
	I0317 12:39:30.646099  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.646143  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.646440  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.646632  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645388  455052 addons.go:238] Setting addon cloud-spanner=true in "addons-012219"
	I0317 12:39:30.647227  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645298  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.648194  455052 out.go:177] * Verifying Kubernetes components...
	I0317 12:39:30.648364  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645317  455052 addons.go:69] Setting ingress=true in profile "addons-012219"
	I0317 12:39:30.649855  455052 addons.go:238] Setting addon ingress=true in "addons-012219"
	I0317 12:39:30.645326  455052 addons.go:69] Setting default-storageclass=true in profile "addons-012219"
	I0317 12:39:30.645339  455052 addons.go:69] Setting gcp-auth=true in profile "addons-012219"
	I0317 12:39:30.645355  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.644656  455052 addons.go:238] Setting addon yakd=true in "addons-012219"
	I0317 12:39:30.645937  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.649767  455052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:39:30.649969  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.650227  455052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-012219"
	I0317 12:39:30.645325  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.650682  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.650903  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.656389  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.656877  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.657273  455052 mustload.go:65] Loading cluster: addons-012219
	I0317 12:39:30.673520  455052 out.go:177]   - Using image docker.io/registry:2.8.3
	I0317 12:39:30.674781  455052 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0317 12:39:30.676894  455052 config.go:182] Loaded profile config "addons-012219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:39:30.678394  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.680292  455052 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0317 12:39:30.680336  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0317 12:39:30.680404  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
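
The inspect template used here and below extracts the host port Docker mapped to the container's 22/tcp; sshutil then dials 127.0.0.1 on that port (33145 later in this log) as user docker. A sketch of reproducing the lookup by hand, with the key path taken from this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-012219
	ssh -i /home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa -p 33145 docker@127.0.0.1 true
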
	I0317 12:39:30.690141  455052 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0317 12:39:30.694421  455052 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0317 12:39:30.694452  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0317 12:39:30.694532  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.699029  455052 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
	I0317 12:39:30.700369  455052 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
	I0317 12:39:30.701547  455052 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
	I0317 12:39:30.704584  455052 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0317 12:39:30.704617  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
	I0317 12:39:30.704799  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.732527  455052 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0317 12:39:30.734198  455052 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0317 12:39:30.734222  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0317 12:39:30.734288  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.734764  455052 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0317 12:39:30.736165  455052 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0317 12:39:30.736197  455052 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0317 12:39:30.736211  455052 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0317 12:39:30.736280  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.737506  455052 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0317 12:39:30.737526  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0317 12:39:30.737582  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.737799  455052 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-012219"
	I0317 12:39:30.737854  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.738350  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.741937  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0317 12:39:30.743129  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0317 12:39:30.743158  455052 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0317 12:39:30.743230  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.749229  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0317 12:39:30.751779  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.757697  455052 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0317 12:39:30.759304  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0317 12:39:30.759385  455052 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:39:30.760535  455052 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:39:30.760652  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0317 12:39:30.764350  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0317 12:39:30.764692  455052 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0317 12:39:30.764712  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0317 12:39:30.764783  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.766937  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0317 12:39:30.768032  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0317 12:39:30.769181  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0317 12:39:30.770260  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0317 12:39:30.771185  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0317 12:39:30.771210  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0317 12:39:30.771282  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.774340  455052 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0317 12:39:30.776003  455052 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0317 12:39:30.776032  455052 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0317 12:39:30.776109  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.783240  455052 addons.go:238] Setting addon default-storageclass=true in "addons-012219"
	I0317 12:39:30.783295  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.783728  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.788545  455052 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0317 12:39:30.788572  455052 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 12:39:30.788629  455052 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0317 12:39:30.790038  455052 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0317 12:39:30.790060  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0317 12:39:30.790121  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.789063  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.790720  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0317 12:39:30.790737  455052 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0317 12:39:30.790793  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.791069  455052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:39:30.791082  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 12:39:30.791120  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.797195  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.798718  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.802862  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.806046  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.821704  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.826613  455052 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0317 12:39:30.827797  455052 out.go:177]   - Using image docker.io/busybox:stable
	I0317 12:39:30.827903  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.828417  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.829204  455052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0317 12:39:30.829229  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0317 12:39:30.829288  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.829925  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.831815  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.833757  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.835190  455052 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 12:39:30.835212  455052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 12:39:30.835267  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.846343  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	W0317 12:39:30.852133  455052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0317 12:39:30.852179  455052 retry.go:31] will retry after 165.673275ms: ssh: handshake failed: EOF
	I0317 12:39:30.861530  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.869345  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.870549  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	W0317 12:39:30.871334  455052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0317 12:39:30.871362  455052 retry.go:31] will retry after 367.240618ms: ssh: handshake failed: EOF
	W0317 12:39:30.872875  455052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0317 12:39:30.872906  455052 retry.go:31] will retry after 248.60274ms: ssh: handshake failed: EOF
	I0317 12:39:30.966162  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 12:39:30.966299  455052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0317 12:39:31.019574  455052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0317 12:39:31.019616  455052 retry.go:31] will retry after 457.982718ms: ssh: handshake failed: EOF
	I0317 12:39:31.257175  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0317 12:39:31.267972  455052 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0317 12:39:31.268051  455052 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0317 12:39:31.359257  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0317 12:39:31.359372  455052 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0317 12:39:31.365280  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:39:31.449821  455052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0317 12:39:31.449865  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0317 12:39:31.462540  455052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0317 12:39:31.462592  455052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0317 12:39:31.551215  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0317 12:39:31.560996  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0317 12:39:31.651199  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0317 12:39:31.656616  455052 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0317 12:39:31.656726  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0317 12:39:31.748151  455052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0317 12:39:31.748256  455052 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0317 12:39:31.749578  455052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0317 12:39:31.749609  455052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0317 12:39:31.751444  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0317 12:39:31.751522  455052 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0317 12:39:31.755408  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0317 12:39:31.762205  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 12:39:31.766344  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0317 12:39:31.851723  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0317 12:39:31.968271  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0317 12:39:32.060014  455052 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0317 12:39:32.060104  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0317 12:39:32.065631  455052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0317 12:39:32.065768  455052 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0317 12:39:32.161052  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0317 12:39:32.161164  455052 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0317 12:39:32.169358  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0317 12:39:32.169448  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0317 12:39:32.365698  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0317 12:39:32.464479  455052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0317 12:39:32.464516  455052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0317 12:39:32.467193  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0317 12:39:32.467280  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0317 12:39:32.846142  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0317 12:39:32.955524  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0317 12:39:32.955555  455052 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0317 12:39:32.970714  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0317 12:39:32.970743  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0317 12:39:33.062460  455052 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.096230372s)
	I0317 12:39:33.062510  455052 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
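
The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block immediately before the forward plugin and a log directive before errors, so host.minikube.internal resolves to the gateway address from inside the cluster. Assuming an otherwise stock Corefile, the rewritten fragment should look roughly like this (unrelated plugins elided):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}
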
	I0317 12:39:33.063931  455052 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.097605245s)
	I0317 12:39:33.064929  455052 node_ready.go:35] waiting up to 6m0s for node "addons-012219" to be "Ready" ...
	I0317 12:39:33.065239  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.807970086s)
	I0317 12:39:33.068002  455052 node_ready.go:49] node "addons-012219" has status "Ready":"True"
	I0317 12:39:33.068029  455052 node_ready.go:38] duration metric: took 3.064074ms for node "addons-012219" to be "Ready" ...
	I0317 12:39:33.068041  455052 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 12:39:33.152942  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0317 12:39:33.152978  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0317 12:39:33.160164  455052 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace to be "Ready" ...
	I0317 12:39:33.567628  455052 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-012219" context rescaled to 1 replicas
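
node_ready/pod_ready above poll the API server through the Go client, and kapi.go trims coredns to a single replica on this one-node cluster. The equivalent readiness check and rescale from outside, as a sketch (context name from this run):

	kubectl --context addons-012219 wait --for=condition=Ready node/addons-012219 --timeout=6m0s
	kubectl --context addons-012219 -n kube-system scale deployment coredns --replicas=1
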
	I0317 12:39:33.754550  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0317 12:39:33.761198  455052 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:39:33.761298  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0317 12:39:33.852436  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0317 12:39:33.852536  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0317 12:39:34.254529  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0317 12:39:34.254655  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0317 12:39:34.462280  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:39:34.845500  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0317 12:39:34.845537  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0317 12:39:35.248845  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:35.362365  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0317 12:39:35.362400  455052 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0317 12:39:35.846643  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.295318375s)
	I0317 12:39:35.846730  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.285697264s)
	I0317 12:39:35.846912  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.481506128s)
	I0317 12:39:35.858428  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0317 12:39:35.858464  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0317 12:39:36.647042  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0317 12:39:36.647081  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0317 12:39:37.065685  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0317 12:39:37.065718  455052 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0317 12:39:37.466458  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0317 12:39:37.758106  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:37.762024  455052 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0317 12:39:37.762225  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:37.784451  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:38.453145  455052 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0317 12:39:38.561607  455052 addons.go:238] Setting addon gcp-auth=true in "addons-012219"
	I0317 12:39:38.561781  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:38.562402  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:38.583369  455052 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0317 12:39:38.583432  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:38.603362  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:39.963042  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.311718597s)
	I0317 12:39:39.963103  455052 addons.go:479] Verifying addon ingress=true in "addons-012219"
	I0317 12:39:39.963146  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.200916336s)
	I0317 12:39:39.963106  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.207671739s)
	I0317 12:39:39.964972  455052 out.go:177] * Verifying ingress addon...
	I0317 12:39:39.967123  455052 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0317 12:39:39.974048  455052 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0317 12:39:39.974085  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
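
kapi.go:75/86/96 is a poll loop: list the pods matching the label selector, then keep re-checking until none report Pending, which accounts for the long runs of "waiting for pod" lines below. The same readiness gate expressed with kubectl, as a sketch:

	kubectl --context addons-012219 -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m0s
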
	I0317 12:39:40.168255  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:40.471454  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:40.970837  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:41.549708  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:42.061728  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:42.169314  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:42.550695  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:42.553278  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.786881387s)
	I0317 12:39:42.553444  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.701613468s)
	I0317 12:39:42.553532  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.585225493s)
	I0317 12:39:42.553565  455052 addons.go:479] Verifying addon registry=true in "addons-012219"
	I0317 12:39:42.553630  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (10.187898413s)
	I0317 12:39:42.553712  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.707451979s)
	I0317 12:39:42.553739  455052 addons.go:479] Verifying addon metrics-server=true in "addons-012219"
	I0317 12:39:42.553800  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.79912533s)
	I0317 12:39:42.553972  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.091594119s)
	W0317 12:39:42.554015  455052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0317 12:39:42.554040  455052 retry.go:31] will retry after 295.085511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
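
This failure is an ordering race rather than a bad manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass that instantiates them are sent in a single apply, and the API server has not yet established the new CRD when the custom resource arrives, hence "ensure CRDs are installed first". minikube's retry (and the later apply --force) papers over it; a sketch of the conventional fix is to split the apply and wait for the CRD to be established first:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
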
	I0317 12:39:42.555834  455052 out.go:177] * Verifying registry addon...
	I0317 12:39:42.555863  455052 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-012219 service yakd-dashboard -n yakd-dashboard
	
	I0317 12:39:42.557616  455052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0317 12:39:42.581691  455052 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0317 12:39:42.581718  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:42.850218  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:39:42.975974  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:43.075964  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:43.155821  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.689213369s)
	I0317 12:39:43.155936  455052 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-012219"
	I0317 12:39:43.155942  455052 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.572537291s)
	I0317 12:39:43.157617  455052 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:39:43.157777  455052 out.go:177] * Verifying csi-hostpath-driver addon...
	I0317 12:39:43.159814  455052 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0317 12:39:43.160582  455052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0317 12:39:43.160914  455052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0317 12:39:43.160937  455052 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0317 12:39:43.169759  455052 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0317 12:39:43.169793  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:43.261056  455052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0317 12:39:43.261091  455052 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0317 12:39:43.348254  455052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0317 12:39:43.348283  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0317 12:39:43.374483  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0317 12:39:43.471359  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:43.572495  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:43.672384  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:43.971940  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:44.061084  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:44.165056  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:44.471394  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:44.649892  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:44.666193  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:44.751533  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:44.969390  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.119113252s)
	I0317 12:39:44.969492  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.594968733s)
	I0317 12:39:44.971520  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:44.971590  455052 addons.go:479] Verifying addon gcp-auth=true in "addons-012219"
	I0317 12:39:44.973564  455052 out.go:177] * Verifying gcp-auth addon...
	I0317 12:39:44.976076  455052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0317 12:39:44.978975  455052 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0317 12:39:45.072406  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:45.173295  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:45.470631  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:45.561686  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:45.665692  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:45.971717  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:46.072106  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:46.164021  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:46.471361  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:46.561277  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:46.664535  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:46.667048  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:46.971565  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:47.072951  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:47.164423  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:47.471232  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:47.561723  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:47.664038  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:47.971270  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:48.071786  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:48.163686  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:48.471336  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:48.560822  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:48.663757  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:48.971328  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:49.072463  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:49.164533  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:49.166752  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:49.471099  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:49.561783  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:49.664190  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:49.970445  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:50.061306  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:50.164637  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:50.471009  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:50.561219  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:50.664148  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:50.971530  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:51.072842  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:51.164105  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:51.471202  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:51.561253  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:51.665220  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:51.668404  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:51.971071  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:52.072052  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:52.164149  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:52.470257  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:52.561883  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:52.663695  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:52.970649  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:53.061766  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:53.164768  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:53.470620  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:53.561248  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:53.665611  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:53.971016  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:54.062562  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:54.163544  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:54.165160  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:54.471067  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:54.562262  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:54.664297  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:54.972022  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:55.072859  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:55.164482  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:55.471483  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:55.571635  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:55.672167  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:55.970581  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:56.061517  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:56.163692  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:56.165852  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:56.470891  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:56.560840  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:56.664424  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:56.971609  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:57.071944  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:57.172129  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:57.471492  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:57.561328  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:57.664371  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:57.971765  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:58.061136  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:58.164717  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:58.167169  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:58.470811  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:58.561875  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:58.664227  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:58.971242  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:59.071723  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:59.173224  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:59.470583  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:59.561582  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:59.663620  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:59.974228  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:00.075968  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:00.181322  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:00.182002  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:00.557628  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:00.560801  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:00.664499  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:00.972439  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:01.073207  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:01.165021  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:01.470938  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:01.562561  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:01.664967  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:01.972110  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:02.073252  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:02.174502  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:02.471130  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:02.561466  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:02.665135  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:02.667591  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:02.972192  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:03.061318  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:03.165767  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:03.470986  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:03.572154  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:03.664213  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:03.971487  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:04.069612  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:04.165226  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:04.471284  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:04.561631  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:04.664524  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:04.971508  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:05.073253  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:05.164360  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:05.166454  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:05.470885  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:05.562638  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:05.664366  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:05.971238  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:06.061221  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:06.164909  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:06.471155  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:06.571876  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:06.664237  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:06.971428  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:07.061486  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:07.163639  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:07.470523  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:07.561415  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:07.664399  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:07.666587  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:07.971118  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:08.061747  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:08.163808  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:08.471246  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:08.561010  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:08.664428  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:08.971582  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:09.062098  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:09.164757  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:09.471103  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:09.561332  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:09.663639  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:09.970712  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:10.061070  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:10.164198  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:10.166333  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:10.470978  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:10.561251  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:10.663441  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:10.970760  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:11.061703  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:11.163861  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:11.471220  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:11.561820  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:11.664502  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:11.971329  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:12.061290  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:12.164136  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:12.471314  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:12.562162  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:12.664543  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:12.666877  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:12.971802  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:13.061379  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:13.164009  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:13.472479  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:13.562232  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:13.664301  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:13.970923  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:14.061876  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:14.164543  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:14.471702  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:14.561707  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:14.664669  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:14.971484  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:15.072515  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:15.163628  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:15.165978  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:15.471504  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:15.561274  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:15.664049  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:15.971437  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:16.061094  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:16.164746  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:16.471713  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:16.561773  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:16.664010  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:16.971221  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:17.061407  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:17.163749  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:17.166163  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:17.471449  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:17.561476  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:17.663628  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:17.971714  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:18.060744  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:18.163928  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:18.470532  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:18.561315  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:18.663470  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:18.971755  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:19.073190  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:19.166677  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:19.173869  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:19.471193  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:19.561520  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:19.664041  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:19.971753  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:20.061098  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:20.164104  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:20.470553  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:20.561338  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:20.663723  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:20.971982  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:21.060631  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:21.164519  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:21.166782  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:21.471230  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:21.560804  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:21.663752  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:21.970837  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:22.061546  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:22.164273  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:22.471264  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:22.561425  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:22.663649  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:22.971476  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:23.061583  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:23.163873  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:23.471076  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:23.561162  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:23.664420  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:23.666843  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:23.971320  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:24.072435  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:24.164277  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:24.470642  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:24.560842  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:24.663920  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:24.973709  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:25.076391  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:25.163684  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:25.470529  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:25.561505  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:25.663906  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:25.971534  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:26.072707  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:26.163741  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:26.166287  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:26.470597  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:26.561589  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:26.664129  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:26.970465  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:27.061507  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:27.163899  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:27.471149  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:27.561493  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:27.663736  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:27.971115  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:28.061301  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:28.163891  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:28.166793  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:28.471391  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:28.560643  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:28.664060  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:28.981446  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:29.081852  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:29.164091  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:29.470388  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:29.561673  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:29.663920  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:29.973060  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:30.060543  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:30.163606  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:30.471522  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:30.560621  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:30.664061  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:30.666611  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:30.990451  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:31.091406  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:31.164809  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:31.470756  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:31.561714  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:31.663874  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:31.971016  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:32.061564  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:32.164859  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:32.471579  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:32.561549  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:32.664019  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:32.970709  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:33.071305  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:33.164531  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:33.167039  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:33.470693  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:33.561678  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:33.664618  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:34.007171  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:34.061865  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:34.164041  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:34.470895  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:34.562349  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:34.663767  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:34.971330  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:35.061022  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:35.164413  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:35.471061  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:35.560933  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:35.664236  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:35.666602  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:35.987282  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:36.061034  455052 kapi.go:107] duration metric: took 53.503416924s to wait for kubernetes.io/minikube-addons=registry ...
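(The kapi.go:96 lines above are a poll loop: list pods matching a label selector, log the phase, sleep, retry until all are Running or the timeout hits — the timestamps show roughly 500ms between polls per selector. A minimal client-go sketch of the same idea, not minikube's actual kapi.go code; namespace, timeout, and poll interval here are illustrative assumptions:)

// Sketch only: poll pods matching a label selector until all report phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					// Mirrors the report's `waiting for pod ... current state: Pending` lines.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // ~500ms spacing, as seen in the timestamps above
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}
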
	I0317 12:40:36.164007  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:36.470388  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:36.664609  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:36.971153  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:37.164856  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:37.470875  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:37.663839  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:37.970590  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:38.164550  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:38.166315  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:38.471165  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:38.663884  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:38.970790  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:39.173617  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:39.176004  455052 pod_ready.go:93] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.176032  455052 pod_ready.go:82] duration metric: took 1m6.015822117s for pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.176044  455052 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-d2bx4" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.192981  455052 pod_ready.go:93] pod "coredns-668d6bf9bc-d2bx4" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.193007  455052 pod_ready.go:82] duration metric: took 16.956484ms for pod "coredns-668d6bf9bc-d2bx4" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.193018  455052 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gf4gw" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.195263  455052 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-gf4gw" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-gf4gw" not found
	I0317 12:40:39.195295  455052 pod_ready.go:82] duration metric: took 2.270605ms for pod "coredns-668d6bf9bc-gf4gw" in "kube-system" namespace to be "Ready" ...
	E0317 12:40:39.195306  455052 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-gf4gw" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-gf4gw" not found
	I0317 12:40:39.195313  455052 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.200296  455052 pod_ready.go:93] pod "etcd-addons-012219" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.200357  455052 pod_ready.go:82] duration metric: took 5.035892ms for pod "etcd-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.200375  455052 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.205221  455052 pod_ready.go:93] pod "kube-apiserver-addons-012219" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.205243  455052 pod_ready.go:82] duration metric: took 4.860483ms for pod "kube-apiserver-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.205253  455052 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.364929  455052 pod_ready.go:93] pod "kube-controller-manager-addons-012219" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.364957  455052 pod_ready.go:82] duration metric: took 159.696703ms for pod "kube-controller-manager-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.364970  455052 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dd72m" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.471469  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:39.664758  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:39.764524  455052 pod_ready.go:93] pod "kube-proxy-dd72m" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.764556  455052 pod_ready.go:82] duration metric: took 399.576924ms for pod "kube-proxy-dd72m" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.764569  455052 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.970483  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:40.164026  455052 pod_ready.go:93] pod "kube-scheduler-addons-012219" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:40.164055  455052 pod_ready.go:82] duration metric: took 399.477967ms for pod "kube-scheduler-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:40.164065  455052 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s96nr" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:40.164021  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:40.565634  455052 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-s96nr" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:40.565663  455052 pod_ready.go:82] duration metric: took 401.590789ms for pod "nvidia-device-plugin-daemonset-s96nr" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:40.565674  455052 pod_ready.go:39] duration metric: took 1m7.497617906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
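(The pod_ready.go lines above go one step further than the phase check: a pod can be Running yet not Ready, so the `"Ready":"True"` / `"Ready":"False"` entries reflect the PodReady condition in the pod's status. A small illustrative helper, not minikube's pod_ready.go:)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True — the same
// signal the "Ready":"True" log lines above are based on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(pod)) // true
}
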
	I0317 12:40:40.565698  455052 api_server.go:52] waiting for apiserver process to appear ...
	I0317 12:40:40.565750  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:40.565773  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:40:40.565833  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:40:40.606351  455052 cri.go:89] found id: "bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:40.606386  455052 cri.go:89] found id: ""
	I0317 12:40:40.606397  455052 logs.go:282] 1 containers: [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983]
	I0317 12:40:40.606449  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.610255  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:40:40.610344  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:40:40.646690  455052 cri.go:89] found id: "0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:40.646721  455052 cri.go:89] found id: ""
	I0317 12:40:40.646732  455052 logs.go:282] 1 containers: [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4]
	I0317 12:40:40.646797  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.650751  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:40:40.650819  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:40:40.664837  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:40.694081  455052 cri.go:89] found id: "9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:40.694109  455052 cri.go:89] found id: ""
	I0317 12:40:40.694120  455052 logs.go:282] 1 containers: [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056]
	I0317 12:40:40.694186  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.698401  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:40:40.698484  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:40:40.736966  455052 cri.go:89] found id: "5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:40.736995  455052 cri.go:89] found id: ""
	I0317 12:40:40.737006  455052 logs.go:282] 1 containers: [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42]
	I0317 12:40:40.737055  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.741396  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:40:40.741478  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:40:40.780872  455052 cri.go:89] found id: "d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:40.780909  455052 cri.go:89] found id: ""
	I0317 12:40:40.780917  455052 logs.go:282] 1 containers: [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3]
	I0317 12:40:40.780965  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.785035  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:40:40.785132  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:40:40.822415  455052 cri.go:89] found id: "379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:40.822441  455052 cri.go:89] found id: ""
	I0317 12:40:40.822450  455052 logs.go:282] 1 containers: [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8]
	I0317 12:40:40.822502  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.826522  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:40:40.826592  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:40:40.863344  455052 cri.go:89] found id: "9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:40.863373  455052 cri.go:89] found id: ""
	I0317 12:40:40.863384  455052 logs.go:282] 1 containers: [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649]
	I0317 12:40:40.863447  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.867562  455052 logs.go:123] Gathering logs for container status ...
	I0317 12:40:40.867591  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 12:40:40.911123  455052 logs.go:123] Gathering logs for kubelet ...
	I0317 12:40:40.911167  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:40:40.985490  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:41.012076  455052 logs.go:123] Gathering logs for kube-apiserver [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983] ...
	I0317 12:40:41.012172  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:41.076592  455052 logs.go:123] Gathering logs for coredns [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056] ...
	I0317 12:40:41.076640  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:41.121182  455052 logs.go:123] Gathering logs for kube-controller-manager [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8] ...
	I0317 12:40:41.121224  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:41.164748  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:41.188610  455052 logs.go:123] Gathering logs for dmesg ...
	I0317 12:40:41.188658  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:40:41.216301  455052 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:40:41.216368  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:40:41.454361  455052 logs.go:123] Gathering logs for etcd [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4] ...
	I0317 12:40:41.454423  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:41.471030  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:41.502158  455052 logs.go:123] Gathering logs for kube-scheduler [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42] ...
	I0317 12:40:41.502206  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:41.546833  455052 logs.go:123] Gathering logs for kube-proxy [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3] ...
	I0317 12:40:41.546886  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:41.586006  455052 logs.go:123] Gathering logs for kindnet [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649] ...
	I0317 12:40:41.586050  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:41.626915  455052 logs.go:123] Gathering logs for containerd ...
	I0317 12:40:41.626946  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
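(The logs.go:123 pass above shells out over SSH to `crictl logs --tail 400 <id>` and `journalctl -u <unit> -n 400` for each discovered container. A local stand-in for one of those invocations, using the apiserver container ID found earlier in this log; this is a sketch of the same command, not minikube's ssh_runner:)

package main

import (
	"fmt"
	"os/exec"
)

// containerLogs runs `sudo crictl logs --tail 400 <id>` locally and returns
// combined stdout/stderr, mirroring the gathering commands in the log above.
func containerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := containerLogs("bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983")
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Println(logs)
}
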
	I0317 12:40:41.664917  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:41.971006  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:42.163415  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:42.470573  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:42.664852  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:42.971232  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:43.165271  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:43.471719  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:43.663641  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:44.046664  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:44.164908  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:44.197012  455052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:40:44.213431  455052 api_server.go:72] duration metric: took 1m13.569048658s to wait for apiserver process to appear ...
	I0317 12:40:44.213465  455052 api_server.go:88] waiting for apiserver healthz status ...
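(api_server.go:88 now switches from "does the process exist" to "does the apiserver answer /healthz". A bare-bones probe of that endpoint; the port is an assumption (8443 is the usual minikube apiserver port, not stated in this log), and depending on RBAC the endpoint may require client credentials rather than an anonymous GET:)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify only because this is a throwaway diagnostic sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz") // port assumed
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expects 200 "ok"
}
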
	I0317 12:40:44.213525  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:40:44.213592  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:40:44.250956  455052 cri.go:89] found id: "bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:44.250984  455052 cri.go:89] found id: ""
	I0317 12:40:44.250991  455052 logs.go:282] 1 containers: [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983]
	I0317 12:40:44.251037  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.255218  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:40:44.255300  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:40:44.292739  455052 cri.go:89] found id: "0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:44.292764  455052 cri.go:89] found id: ""
	I0317 12:40:44.292773  455052 logs.go:282] 1 containers: [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4]
	I0317 12:40:44.292837  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.297234  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:40:44.297309  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:40:44.334012  455052 cri.go:89] found id: "9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:44.334037  455052 cri.go:89] found id: ""
	I0317 12:40:44.334045  455052 logs.go:282] 1 containers: [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056]
	I0317 12:40:44.334109  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.339441  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:40:44.339535  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:40:44.377185  455052 cri.go:89] found id: "5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:44.377208  455052 cri.go:89] found id: ""
	I0317 12:40:44.377216  455052 logs.go:282] 1 containers: [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42]
	I0317 12:40:44.377270  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.381213  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:40:44.381307  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:40:44.419209  455052 cri.go:89] found id: "d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:44.419236  455052 cri.go:89] found id: ""
	I0317 12:40:44.419246  455052 logs.go:282] 1 containers: [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3]
	I0317 12:40:44.419304  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.423259  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:40:44.423334  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:40:44.461215  455052 cri.go:89] found id: "379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:44.461241  455052 cri.go:89] found id: ""
	I0317 12:40:44.461250  455052 logs.go:282] 1 containers: [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8]
	I0317 12:40:44.461313  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.465079  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:40:44.465172  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:40:44.470298  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:44.505756  455052 cri.go:89] found id: "9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:44.505789  455052 cri.go:89] found id: ""
	I0317 12:40:44.505800  455052 logs.go:282] 1 containers: [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649]
	I0317 12:40:44.505862  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.510159  455052 logs.go:123] Gathering logs for kubelet ...
	I0317 12:40:44.510194  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:40:44.612068  455052 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:40:44.612121  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:40:44.665305  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:44.711514  455052 logs.go:123] Gathering logs for kube-apiserver [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983] ...
	I0317 12:40:44.711552  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:44.774357  455052 logs.go:123] Gathering logs for coredns [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056] ...
	I0317 12:40:44.774406  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:44.813382  455052 logs.go:123] Gathering logs for kube-scheduler [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42] ...
	I0317 12:40:44.813419  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:44.863365  455052 logs.go:123] Gathering logs for kube-proxy [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3] ...
	I0317 12:40:44.863421  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:44.905079  455052 logs.go:123] Gathering logs for kindnet [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649] ...
	I0317 12:40:44.905112  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:44.970391  455052 logs.go:123] Gathering logs for container status ...
	I0317 12:40:44.970440  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
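The `which crictl || echo crictl` fallback above (with `docker ps -a` as a second fallback) is how the status gather copes with either runtime being present; the same check can be run by hand inside the node. A sketch, using the profile name from this run:

    minikube -p addons-012219 ssh -- sudo crictl ps -a --name kube-apiserver -o table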
	I0317 12:40:44.971654  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:45.078232  455052 logs.go:123] Gathering logs for dmesg ...
	I0317 12:40:45.078288  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:40:45.106924  455052 logs.go:123] Gathering logs for etcd [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4] ...
	I0317 12:40:45.106966  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:45.164276  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:45.188075  455052 logs.go:123] Gathering logs for kube-controller-manager [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8] ...
	I0317 12:40:45.188122  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:45.295976  455052 logs.go:123] Gathering logs for containerd ...
	I0317 12:40:45.296037  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
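These per-component gathers (kubelet and containerd via journalctl, each control-plane container via crictl logs, dmesg, describe nodes) are the same pieces a single `minikube logs` call bundles; a hand-run equivalent, with the output file name as a placeholder:

    minikube -p addons-012219 logs --file=minikube-addons-012219.log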
	I0317 12:40:45.471680  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:45.664738  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:45.972089  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:46.166202  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:46.472051  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:46.664657  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:46.970725  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:47.164553  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:47.471404  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:47.664810  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:47.873980  455052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0317 12:40:47.878931  455052 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0317 12:40:47.879917  455052 api_server.go:141] control plane version: v1.32.2
	I0317 12:40:47.879944  455052 api_server.go:131] duration metric: took 3.666470439s to wait for apiserver health ...
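The healthz probe that just returned 200/ok can be reproduced without minikube's internals; `kubectl get --raw` issues the same authenticated GET. A sketch using the context name from this run (`/readyz` is the newer equivalent endpoint):

    kubectl --context addons-012219 get --raw /healthz            # prints: ok
    kubectl --context addons-012219 get --raw '/readyz?verbose'   # per-check breakdown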
	I0317 12:40:47.879951  455052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 12:40:47.879974  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:40:47.880026  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:40:47.918549  455052 cri.go:89] found id: "bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:47.918579  455052 cri.go:89] found id: ""
	I0317 12:40:47.918589  455052 logs.go:282] 1 containers: [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983]
	I0317 12:40:47.918637  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:47.922513  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:40:47.922593  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:40:47.966599  455052 cri.go:89] found id: "0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:47.966717  455052 cri.go:89] found id: ""
	I0317 12:40:47.966734  455052 logs.go:282] 1 containers: [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4]
	I0317 12:40:47.966814  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:47.972421  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:47.974109  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:40:47.974185  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:40:48.083818  455052 cri.go:89] found id: "9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:48.083846  455052 cri.go:89] found id: ""
	I0317 12:40:48.083857  455052 logs.go:282] 1 containers: [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056]
	I0317 12:40:48.083929  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.088505  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:40:48.088582  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:40:48.164562  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:48.182882  455052 cri.go:89] found id: "5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:48.182907  455052 cri.go:89] found id: ""
	I0317 12:40:48.182917  455052 logs.go:282] 1 containers: [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42]
	I0317 12:40:48.182973  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.187292  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:40:48.187364  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:40:48.266843  455052 cri.go:89] found id: "d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:48.266868  455052 cri.go:89] found id: ""
	I0317 12:40:48.266876  455052 logs.go:282] 1 containers: [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3]
	I0317 12:40:48.266924  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.271302  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:40:48.271366  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:40:48.362922  455052 cri.go:89] found id: "379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:48.362948  455052 cri.go:89] found id: ""
	I0317 12:40:48.362958  455052 logs.go:282] 1 containers: [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8]
	I0317 12:40:48.363018  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.366903  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:40:48.366987  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:40:48.445208  455052 cri.go:89] found id: "9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:48.445239  455052 cri.go:89] found id: ""
	I0317 12:40:48.445250  455052 logs.go:282] 1 containers: [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649]
	I0317 12:40:48.445316  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.449909  455052 logs.go:123] Gathering logs for kubelet ...
	I0317 12:40:48.449941  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:40:48.470951  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:48.564227  455052 logs.go:123] Gathering logs for dmesg ...
	I0317 12:40:48.564287  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:40:48.594274  455052 logs.go:123] Gathering logs for etcd [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4] ...
	I0317 12:40:48.594327  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:48.666652  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:48.687502  455052 logs.go:123] Gathering logs for coredns [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056] ...
	I0317 12:40:48.687545  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:48.866713  455052 logs.go:123] Gathering logs for kube-controller-manager [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8] ...
	I0317 12:40:48.866758  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:48.974564  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:49.093453  455052 logs.go:123] Gathering logs for container status ...
	I0317 12:40:49.093505  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 12:40:49.164436  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:49.278407  455052 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:40:49.278448  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:40:49.471253  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:49.656290  455052 logs.go:123] Gathering logs for kube-apiserver [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983] ...
	I0317 12:40:49.656363  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:49.666846  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:49.971519  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:49.985522  455052 logs.go:123] Gathering logs for kube-scheduler [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42] ...
	I0317 12:40:49.985575  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:50.080623  455052 logs.go:123] Gathering logs for kube-proxy [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3] ...
	I0317 12:40:50.080664  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:50.150398  455052 logs.go:123] Gathering logs for kindnet [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649] ...
	I0317 12:40:50.150435  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:50.165326  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:50.190479  455052 logs.go:123] Gathering logs for containerd ...
	I0317 12:40:50.190509  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 12:40:50.501424  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:50.664829  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:51.046634  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:51.165068  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:51.472476  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:51.664153  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:51.971633  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:52.164477  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:52.471024  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:52.664532  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:52.765578  455052 system_pods.go:59] 19 kube-system pods found
	I0317 12:40:52.765623  455052 system_pods.go:61] "amd-gpu-device-plugin-vshjt" [f90dc780-3781-4dfa-aa72-9f01de540522] Running
	I0317 12:40:52.765632  455052 system_pods.go:61] "coredns-668d6bf9bc-d2bx4" [3984c722-20f8-4593-8acc-69f7a96879cc] Running
	I0317 12:40:52.765637  455052 system_pods.go:61] "csi-hostpath-attacher-0" [6409109a-02f5-4560-a0ce-ff758742667a] Running
	I0317 12:40:52.765642  455052 system_pods.go:61] "csi-hostpath-resizer-0" [db7fcb3f-a582-496d-8f39-b4b58ac628a9] Running
	I0317 12:40:52.765654  455052 system_pods.go:61] "csi-hostpathplugin-dxflx" [e4429700-36d8-4fe3-8ee4-ec430215ad55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0317 12:40:52.765663  455052 system_pods.go:61] "etcd-addons-012219" [02330569-8d20-41f9-b759-63f8904f2f4b] Running
	I0317 12:40:52.765675  455052 system_pods.go:61] "kindnet-cz7dg" [3e28249d-348a-4e40-b2b3-8b46b677ac10] Running
	I0317 12:40:52.765680  455052 system_pods.go:61] "kube-apiserver-addons-012219" [d053686d-1bfb-4c4f-83a6-9550a5c50bef] Running
	I0317 12:40:52.765690  455052 system_pods.go:61] "kube-controller-manager-addons-012219" [cfe19195-6038-44ef-93e8-2a3a1fa0eeb6] Running
	I0317 12:40:52.765704  455052 system_pods.go:61] "kube-ingress-dns-minikube" [8cd48e7c-55b2-4237-b945-d6e7be8d3040] Running
	I0317 12:40:52.765710  455052 system_pods.go:61] "kube-proxy-dd72m" [3c1ba3e7-f0a0-4520-ac21-293d84b96937] Running
	I0317 12:40:52.765714  455052 system_pods.go:61] "kube-scheduler-addons-012219" [f2a8c619-ebae-4ab1-9e80-476b5bc94a7c] Running
	I0317 12:40:52.765722  455052 system_pods.go:61] "metrics-server-7fbb699795-rmd9f" [457e13af-aba0-4869-9953-d240bdcf8c93] Running
	I0317 12:40:52.765727  455052 system_pods.go:61] "nvidia-device-plugin-daemonset-s96nr" [dd2959e8-cb33-4011-825c-beffbbfe67f2] Running
	I0317 12:40:52.765735  455052 system_pods.go:61] "registry-6c88467877-qxwgl" [455262b9-8f7c-405f-8f6a-e11619b4a82b] Running
	I0317 12:40:52.765740  455052 system_pods.go:61] "registry-proxy-6mr4n" [1ff4a6b3-772a-4bb4-b071-5fda919d74bb] Running
	I0317 12:40:52.765749  455052 system_pods.go:61] "snapshot-controller-68b874b76f-kqqj4" [4c44d0a7-10b5-4560-b08a-547f48a9d788] Running
	I0317 12:40:52.765754  455052 system_pods.go:61] "snapshot-controller-68b874b76f-vg6qw" [83b1a84d-5b5d-4f61-9899-b115352819b6] Running
	I0317 12:40:52.765762  455052 system_pods.go:61] "storage-provisioner" [2308e1c7-7aa6-49b3-ac63-c49fdf64fced] Running
	I0317 12:40:52.765771  455052 system_pods.go:74] duration metric: took 4.885812296s to wait for pod list to return data ...
	I0317 12:40:52.765785  455052 default_sa.go:34] waiting for default service account to be created ...
	I0317 12:40:52.768498  455052 default_sa.go:45] found service account: "default"
	I0317 12:40:52.768530  455052 default_sa.go:55] duration metric: took 2.736413ms for default service account to be created ...
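The default-service-account probe above amounts to this one-liner (context name from this run):

    kubectl --context addons-012219 -n default get serviceaccount default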
	I0317 12:40:52.768543  455052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 12:40:52.772352  455052 system_pods.go:86] 19 kube-system pods found
	I0317 12:40:52.772393  455052 system_pods.go:89] "amd-gpu-device-plugin-vshjt" [f90dc780-3781-4dfa-aa72-9f01de540522] Running
	I0317 12:40:52.772404  455052 system_pods.go:89] "coredns-668d6bf9bc-d2bx4" [3984c722-20f8-4593-8acc-69f7a96879cc] Running
	I0317 12:40:52.772412  455052 system_pods.go:89] "csi-hostpath-attacher-0" [6409109a-02f5-4560-a0ce-ff758742667a] Running
	I0317 12:40:52.772417  455052 system_pods.go:89] "csi-hostpath-resizer-0" [db7fcb3f-a582-496d-8f39-b4b58ac628a9] Running
	I0317 12:40:52.772427  455052 system_pods.go:89] "csi-hostpathplugin-dxflx" [e4429700-36d8-4fe3-8ee4-ec430215ad55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0317 12:40:52.772438  455052 system_pods.go:89] "etcd-addons-012219" [02330569-8d20-41f9-b759-63f8904f2f4b] Running
	I0317 12:40:52.772445  455052 system_pods.go:89] "kindnet-cz7dg" [3e28249d-348a-4e40-b2b3-8b46b677ac10] Running
	I0317 12:40:52.772453  455052 system_pods.go:89] "kube-apiserver-addons-012219" [d053686d-1bfb-4c4f-83a6-9550a5c50bef] Running
	I0317 12:40:52.772459  455052 system_pods.go:89] "kube-controller-manager-addons-012219" [cfe19195-6038-44ef-93e8-2a3a1fa0eeb6] Running
	I0317 12:40:52.772469  455052 system_pods.go:89] "kube-ingress-dns-minikube" [8cd48e7c-55b2-4237-b945-d6e7be8d3040] Running
	I0317 12:40:52.772474  455052 system_pods.go:89] "kube-proxy-dd72m" [3c1ba3e7-f0a0-4520-ac21-293d84b96937] Running
	I0317 12:40:52.772482  455052 system_pods.go:89] "kube-scheduler-addons-012219" [f2a8c619-ebae-4ab1-9e80-476b5bc94a7c] Running
	I0317 12:40:52.772488  455052 system_pods.go:89] "metrics-server-7fbb699795-rmd9f" [457e13af-aba0-4869-9953-d240bdcf8c93] Running
	I0317 12:40:52.772500  455052 system_pods.go:89] "nvidia-device-plugin-daemonset-s96nr" [dd2959e8-cb33-4011-825c-beffbbfe67f2] Running
	I0317 12:40:52.772507  455052 system_pods.go:89] "registry-6c88467877-qxwgl" [455262b9-8f7c-405f-8f6a-e11619b4a82b] Running
	I0317 12:40:52.772513  455052 system_pods.go:89] "registry-proxy-6mr4n" [1ff4a6b3-772a-4bb4-b071-5fda919d74bb] Running
	I0317 12:40:52.772520  455052 system_pods.go:89] "snapshot-controller-68b874b76f-kqqj4" [4c44d0a7-10b5-4560-b08a-547f48a9d788] Running
	I0317 12:40:52.772525  455052 system_pods.go:89] "snapshot-controller-68b874b76f-vg6qw" [83b1a84d-5b5d-4f61-9899-b115352819b6] Running
	I0317 12:40:52.772538  455052 system_pods.go:89] "storage-provisioner" [2308e1c7-7aa6-49b3-ac63-c49fdf64fced] Running
	I0317 12:40:52.772548  455052 system_pods.go:126] duration metric: took 3.997984ms to wait for k8s-apps to be running ...
	I0317 12:40:52.772565  455052 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 12:40:52.772634  455052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:40:52.787084  455052 system_svc.go:56] duration metric: took 14.507575ms WaitForService to wait for kubelet
	I0317 12:40:52.787128  455052 kubeadm.go:582] duration metric: took 1m22.14274535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:40:52.787164  455052 node_conditions.go:102] verifying NodePressure condition ...
	I0317 12:40:52.790315  455052 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0317 12:40:52.790344  455052 node_conditions.go:123] node cpu capacity is 8
	I0317 12:40:52.790361  455052 node_conditions.go:105] duration metric: took 3.191982ms to run NodePressure ...
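The NodePressure check reads the node's reported capacity and conditions; the two figures it just logged (8 CPUs, 304681132Ki ephemeral storage) can be pulled directly. A sketch with the node and context names from this run:

    kubectl --context addons-012219 get node addons-012219 \
      -o jsonpath='{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}'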
	I0317 12:40:52.790375  455052 start.go:241] waiting for startup goroutines ...
	I0317 12:40:52.970590  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:53.163873  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:53.471714  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:53.663961  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:53.971638  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:54.165249  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:54.471187  455052 kapi.go:107] duration metric: took 1m14.504065676s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0317 12:40:54.663724  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:55.181177  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:55.664966  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:56.164506  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:56.664667  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:57.164624  455052 kapi.go:107] duration metric: took 1m14.004040538s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
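The kapi.go polling loops that just completed for ingress-nginx and csi-hostpath-driver are equivalent to `kubectl wait` against the same label selectors (namespaces inferred from the pod lists in this log; the ingress selector below is the controller-only one the test itself uses, since completed admission-job pods never become Ready):

    kubectl --context addons-012219 -n ingress-nginx wait pod \
      -l app.kubernetes.io/component=controller --for=condition=Ready --timeout=120s
    kubectl --context addons-012219 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=120s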
	I0317 12:41:06.981158  455052 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0317 12:41:06.981192  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:07.479529  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical kapi.go:96 wait messages for "kubernetes.io/minikube-addons=gcp-auth", still Pending, repeated roughly every 500 ms through 12:42:13 ...]
	I0317 12:42:13.979706  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:42:14.480678  455052 kapi.go:107] duration metric: took 2m29.504600411s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0317 12:42:14.482276  455052 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-012219 cluster.
	I0317 12:42:14.483771  455052 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0317 12:42:14.485152  455052 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0317 12:42:14.486680  455052 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, volcano, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0317 12:42:14.487909  455052 addons.go:514] duration metric: took 2m43.843476509s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns default-storageclass volcano inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0317 12:42:14.487990  455052 start.go:246] waiting for cluster config update ...
	I0317 12:42:14.488027  455052 start.go:255] writing updated cluster config ...
	I0317 12:42:14.488433  455052 ssh_runner.go:195] Run: rm -f paused
	I0317 12:42:14.545367  455052 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 12:42:14.547125  455052 out.go:177] * Done! kubectl is now configured to use "addons-012219" cluster and "default" namespace by default
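The two gcp-auth hints printed above map onto these commands; the profile name is from this run, and the pod name in the label example is a placeholder:

    # Re-mount credentials into existing workloads:
    minikube -p addons-012219 addons enable gcp-auth --refresh
    # Opt a single pod out of credential mounting:
    kubectl label pod my-pod gcp-auth-skip-secret=true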
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	72f0986b33bff       56cc512116c8f       8 minutes ago       Running             busybox                   0                   c2583758c78ae       busybox
	f883a246b949a       ee44bc2368033       10 minutes ago      Running             controller                0                   33fb94840828b       ingress-nginx-controller-56d7c84fd4-4z52c
	3b2f143a08370       a62eeff05ba51       11 minutes ago      Exited              patch                     0                   8be1f7fa66df6       ingress-nginx-admission-patch-t76b7
	a3b120baf6d20       a62eeff05ba51       11 minutes ago      Exited              create                    0                   bb57596d33854       ingress-nginx-admission-create-l6q8w
	9af85dc8bdce7       c69fa2e9cbf5f       11 minutes ago      Running             coredns                   0                   fdb2ccd4b262c       coredns-668d6bf9bc-d2bx4
	c9fa71bfc47e4       30dd67412fdea       11 minutes ago      Running             minikube-ingress-dns      0                   11e8588abc22d       kube-ingress-dns-minikube
	9b64fcbeb6014       df3849d954c98       12 minutes ago      Running             kindnet-cni               0                   75ed382f3859f       kindnet-cz7dg
	7e31db05a70a8       6e38f40d628db       12 minutes ago      Running             storage-provisioner       0                   73e2163ca213c       storage-provisioner
	d3a2f527a6876       f1332858868e1       12 minutes ago      Running             kube-proxy                0                   0356e8b8272c6       kube-proxy-dd72m
	bb5f00b762560       85b7a174738ba       12 minutes ago      Running             kube-apiserver            0                   37827ade0909f       kube-apiserver-addons-012219
	0c8a01ff0ac04       a9e7e6b294baf       12 minutes ago      Running             etcd                      0                   5301a69037bec       etcd-addons-012219
	379f28506a876       b6a454c5a800d       12 minutes ago      Running             kube-controller-manager   0                   e29b57dbb448a       kube-controller-manager-addons-012219
	5e2a09620775b       d8e673e7c9983       12 minutes ago      Running             kube-scheduler            0                   07e0a925b2596       kube-scheduler-addons-012219
	
	
	==> containerd <==
	Mar 17 12:46:58 addons-012219 containerd[864]: time="2025-03-17T12:46:58.429161026Z" level=warning msg="cleaning up after shim disconnected" id=1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7 namespace=k8s.io
	Mar 17 12:46:58 addons-012219 containerd[864]: time="2025-03-17T12:46:58.429171944Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 17 12:46:58 addons-012219 containerd[864]: time="2025-03-17T12:46:58.477223906Z" level=info msg="TearDown network for sandbox \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\" successfully"
	Mar 17 12:46:58 addons-012219 containerd[864]: time="2025-03-17T12:46:58.477260485Z" level=info msg="StopPodSandbox for \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\" returns successfully"
	Mar 17 12:46:59 addons-012219 containerd[864]: time="2025-03-17T12:46:59.357212676Z" level=info msg="RemoveContainer for \"2e445477a0d3a37219a92971ee9fea74bc82f78069f708a897dff3ed7d1c74f7\""
	Mar 17 12:46:59 addons-012219 containerd[864]: time="2025-03-17T12:46:59.362970878Z" level=info msg="RemoveContainer for \"2e445477a0d3a37219a92971ee9fea74bc82f78069f708a897dff3ed7d1c74f7\" returns successfully"
	Mar 17 12:46:59 addons-012219 containerd[864]: time="2025-03-17T12:46:59.363640568Z" level=error msg="ContainerStatus for \"2e445477a0d3a37219a92971ee9fea74bc82f78069f708a897dff3ed7d1c74f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e445477a0d3a37219a92971ee9fea74bc82f78069f708a897dff3ed7d1c74f7\": not found"
	Mar 17 12:47:26 addons-012219 containerd[864]: time="2025-03-17T12:47:26.767491910Z" level=info msg="StopPodSandbox for \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\""
	Mar 17 12:47:26 addons-012219 containerd[864]: time="2025-03-17T12:47:26.776840669Z" level=info msg="TearDown network for sandbox \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\" successfully"
	Mar 17 12:47:26 addons-012219 containerd[864]: time="2025-03-17T12:47:26.776894859Z" level=info msg="StopPodSandbox for \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\" returns successfully"
	Mar 17 12:47:26 addons-012219 containerd[864]: time="2025-03-17T12:47:26.777462331Z" level=info msg="RemovePodSandbox for \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\""
	Mar 17 12:47:26 addons-012219 containerd[864]: time="2025-03-17T12:47:26.777511658Z" level=info msg="Forcibly stopping sandbox \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\""
	Mar 17 12:47:26 addons-012219 containerd[864]: time="2025-03-17T12:47:26.787172523Z" level=info msg="TearDown network for sandbox \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\" successfully"
	Mar 17 12:47:26 addons-012219 containerd[864]: time="2025-03-17T12:47:26.792244737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Mar 17 12:47:26 addons-012219 containerd[864]: time="2025-03-17T12:47:26.792406962Z" level=info msg="RemovePodSandbox \"1cad4a4a9bc312dab586ba3f4b3a521ea09319db2dd4b6264d56b403f0b5f3e7\" returns successfully"
	Mar 17 12:49:03 addons-012219 containerd[864]: time="2025-03-17T12:49:03.989435050Z" level=info msg="PullImage \"busybox:stable\""
	Mar 17 12:49:03 addons-012219 containerd[864]: time="2025-03-17T12:49:03.991493051Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:49:04 addons-012219 containerd[864]: time="2025-03-17T12:49:04.686298751Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:49:06 addons-012219 containerd[864]: time="2025-03-17T12:49:06.969364470Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 12:49:06 addons-012219 containerd[864]: time="2025-03-17T12:49:06.969442995Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=21179"
	Mar 17 12:49:28 addons-012219 containerd[864]: time="2025-03-17T12:49:28.988957824Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Mar 17 12:49:28 addons-012219 containerd[864]: time="2025-03-17T12:49:28.991004126Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:49:29 addons-012219 containerd[864]: time="2025-03-17T12:49:29.664579275Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:49:31 addons-012219 containerd[864]: time="2025-03-17T12:49:31.537449216Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 12:49:31 addons-012219 containerd[864]: time="2025-03-17T12:49:31.537522014Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	
	
	==> coredns [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056] <==
	[INFO] 10.244.0.16:34216 - 37652 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188479s
	[INFO] 10.244.0.16:43420 - 27799 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004589602s
	[INFO] 10.244.0.16:43420 - 27455 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004741711s
	[INFO] 10.244.0.16:47013 - 51640 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003867777s
	[INFO] 10.244.0.16:47013 - 51331 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004475572s
	[INFO] 10.244.0.16:39077 - 47593 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003471984s
	[INFO] 10.244.0.16:39077 - 47344 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004574207s
	[INFO] 10.244.0.16:39183 - 16956 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144916s
	[INFO] 10.244.0.16:39183 - 17153 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000241434s
	[INFO] 10.244.0.26:34264 - 6039 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000313032s
	[INFO] 10.244.0.26:55315 - 55406 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000421144s
	[INFO] 10.244.0.26:46365 - 29788 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167696s
	[INFO] 10.244.0.26:40754 - 38194 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000205761s
	[INFO] 10.244.0.26:49806 - 38763 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009024s
	[INFO] 10.244.0.26:59680 - 32634 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111996s
	[INFO] 10.244.0.26:35656 - 5418 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007327485s
	[INFO] 10.244.0.26:50153 - 9679 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007599611s
	[INFO] 10.244.0.26:55286 - 36934 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006995331s
	[INFO] 10.244.0.26:55921 - 56025 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007813438s
	[INFO] 10.244.0.26:59506 - 19688 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004925485s
	[INFO] 10.244.0.26:42060 - 55729 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00572541s
	[INFO] 10.244.0.26:42368 - 10263 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002102149s
	[INFO] 10.244.0.26:57842 - 8006 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002237615s
	[INFO] 10.244.0.31:36061 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000299324s
	[INFO] 10.244.0.31:44219 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202677s
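
Note: the NXDOMAIN bursts above are the expected ndots:5 search-path expansion, not a DNS fault: each external name (e.g. storage.googleapis.com) is tried against every cluster and GCE search domain (cluster.local, europe-west4-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal) before the bare name resolves with NOERROR. One way to confirm from inside a pod (<any-running-pod> is a placeholder; the failing pods here never started):

    kubectl --context addons-012219 exec <any-running-pod> -- cat /etc/resolv.conf
    # expected (sketch): search <ns>.svc.cluster.local svc.cluster.local cluster.local ...
    #                    options ndots:5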
	
	
	==> describe nodes <==
	Name:               addons-012219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-012219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=addons-012219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T12_39_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-012219
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 12:39:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-012219
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 12:51:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 12:48:05 +0000   Mon, 17 Mar 2025 12:39:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 12:48:05 +0000   Mon, 17 Mar 2025 12:39:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 12:48:05 +0000   Mon, 17 Mar 2025 12:39:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 12:48:05 +0000   Mon, 17 Mar 2025 12:39:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-012219
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 513f2da8e2ab4b528f20355459ada2cc
	  System UUID:                718990bd-83c1-42aa-9bb1-42fb8bb0fb09
	  Boot ID:                    40219139-515e-4d1c-86e4-bab1900bd49a
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-4z52c    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         12m
	  kube-system                 coredns-668d6bf9bc-d2bx4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-012219                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-cz7dg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-012219                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-012219        200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dd72m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-012219                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-012219 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-012219 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-012219 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-012219 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-012219 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-012219 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-012219 event: Registered Node addons-012219 in Controller
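
Note: the node is Ready with ample headroom (950m CPU / 310Mi memory requested against 8 CPUs / ~31Gi allocatable), so the pending pods above are image-pull failures, not scheduling pressure. The same numbers can be re-derived directly (sketch):

    kubectl --context addons-012219 describe node addons-012219 | grep -A 7 'Allocated resources'
    kubectl --context addons-012219 get node addons-012219 -o jsonpath='{.status.allocatable}'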
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +2.171804] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000008] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000005] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000004] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.047810] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000009] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000001] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000011] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000008] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[Mar17 12:32] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000007] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.043860] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000003] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
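
Note: the repeated "martian source" lines are the host kernel's rp_filter logging the in-cluster service VIP 10.96.0.1 arriving on the Docker bridge interfaces; in nested minikube/KIC setups this is cosmetic noise. If it clutters the host log, it can be muted on the host (an assumption that you control the host; the per-interface setting may also need clearing, since the kernel takes the max of the global and interface values):

    sysctl -w net.ipv4.conf.all.log_martians=0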
	
	
	==> etcd [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4] <==
	{"level":"info","ts":"2025-03-17T12:40:39.171794Z","caller":"traceutil/trace.go:171","msg":"trace[793474553] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1241; }","duration":"102.087943ms","start":"2025-03-17T12:40:39.069692Z","end":"2025-03-17T12:40:39.171780Z","steps":["trace[793474553] 'agreement among raft nodes before linearized reading'  (duration: 101.727159ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:40:41.445281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.039026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/addons-012219\" limit:1 ","response":"range_response_count:1 size:555"}
	{"level":"info","ts":"2025-03-17T12:40:41.445349Z","caller":"traceutil/trace.go:171","msg":"trace[1550817072] range","detail":"{range_begin:/registry/leases/kube-node-lease/addons-012219; range_end:; response_count:1; response_revision:1251; }","duration":"129.145148ms","start":"2025-03-17T12:40:41.316187Z","end":"2025-03-17T12:40:41.445332Z","steps":["trace[1550817072] 'range keys from in-memory index tree'  (duration: 128.87378ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:41:21.112639Z","caller":"traceutil/trace.go:171","msg":"trace[1120577109] transaction","detail":"{read_only:false; response_revision:1413; number_of_response:1; }","duration":"105.170138ms","start":"2025-03-17T12:41:21.007444Z","end":"2025-03-17T12:41:21.112614Z","steps":["trace[1120577109] 'process raft request'  (duration: 105.008345ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:42:45.196439Z","caller":"traceutil/trace.go:171","msg":"trace[1142690297] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1660; }","duration":"109.500712ms","start":"2025-03-17T12:42:45.086909Z","end":"2025-03-17T12:42:45.196409Z","steps":["trace[1142690297] 'process raft request'  (duration: 109.140613ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:42:45.401890Z","caller":"traceutil/trace.go:171","msg":"trace[1069886062] linearizableReadLoop","detail":"{readStateIndex:1727; appliedIndex:1726; }","duration":"135.149903ms","start":"2025-03-17T12:42:45.266716Z","end":"2025-03-17T12:42:45.401866Z","steps":["trace[1069886062] 'read index received'  (duration: 60.1132ms)","trace[1069886062] 'applied index is now lower than readState.Index'  (duration: 75.035995ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:42:45.401909Z","caller":"traceutil/trace.go:171","msg":"trace[1395844034] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1663; }","duration":"135.750169ms","start":"2025-03-17T12:42:45.266137Z","end":"2025-03-17T12:42:45.401887Z","steps":["trace[1395844034] 'process raft request'  (duration: 60.729876ms)","trace[1395844034] 'compare'  (duration: 74.868841ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T12:42:45.402090Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.137849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2025-03-17T12:42:45.402137Z","caller":"traceutil/trace.go:171","msg":"trace[1936804532] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1663; }","duration":"135.208194ms","start":"2025-03-17T12:42:45.266916Z","end":"2025-03-17T12:42:45.402125Z","steps":["trace[1936804532] 'agreement among raft nodes before linearized reading'  (duration: 135.123423ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:42:45.402088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.220997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/volcano-system/volcano-scheduler-service\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:42:45.402284Z","caller":"traceutil/trace.go:171","msg":"trace[218181080] range","detail":"{range_begin:/registry/services/endpoints/volcano-system/volcano-scheduler-service; range_end:; response_count:0; response_revision:1663; }","duration":"135.453301ms","start":"2025-03-17T12:42:45.266818Z","end":"2025-03-17T12:42:45.402272Z","steps":["trace[218181080] 'agreement among raft nodes before linearized reading'  (duration: 135.204345ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:42:45.402089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.36442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/volcano-system/volcano-scheduler-service-6m88g\" limit:1 ","response":"range_response_count:1 size:1209"}
	{"level":"info","ts":"2025-03-17T12:42:45.402392Z","caller":"traceutil/trace.go:171","msg":"trace[1412987289] range","detail":"{range_begin:/registry/endpointslices/volcano-system/volcano-scheduler-service-6m88g; range_end:; response_count:1; response_revision:1663; }","duration":"135.692569ms","start":"2025-03-17T12:42:45.266691Z","end":"2025-03-17T12:42:45.402383Z","steps":["trace[1412987289] 'agreement among raft nodes before linearized reading'  (duration: 135.303131ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:42:45.745091Z","caller":"traceutil/trace.go:171","msg":"trace[1704665035] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1684; }","duration":"175.124333ms","start":"2025-03-17T12:42:45.569942Z","end":"2025-03-17T12:42:45.745066Z","steps":["trace[1704665035] 'process raft request'  (duration: 87.948789ms)","trace[1704665035] 'compare'  (duration: 86.814718ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:43:18.625186Z","caller":"traceutil/trace.go:171","msg":"trace[1253148108] linearizableReadLoop","detail":"{readStateIndex:1983; appliedIndex:1982; }","duration":"118.524333ms","start":"2025-03-17T12:43:18.506639Z","end":"2025-03-17T12:43:18.625163Z","steps":["trace[1253148108] 'read index received'  (duration: 60.110846ms)","trace[1253148108] 'applied index is now lower than readState.Index'  (duration: 58.412627ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T12:43:18.625392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.702389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:43:18.625403Z","caller":"traceutil/trace.go:171","msg":"trace[838863475] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1909; }","duration":"120.334731ms","start":"2025-03-17T12:43:18.505045Z","end":"2025-03-17T12:43:18.625380Z","steps":["trace[838863475] 'process raft request'  (duration: 61.76538ms)","trace[838863475] 'compare'  (duration: 58.213746ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:43:18.625450Z","caller":"traceutil/trace.go:171","msg":"trace[1359405829] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:1909; }","duration":"118.802749ms","start":"2025-03-17T12:43:18.506634Z","end":"2025-03-17T12:43:18.625437Z","steps":["trace[1359405829] 'agreement among raft nodes before linearized reading'  (duration: 118.667931ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:43:18.625550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.875558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-6f7db97f95\" limit:1 ","response":"range_response_count:1 size:2926"}
	{"level":"warn","ts":"2025-03-17T12:43:18.625598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.878461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-s96nr\" limit:1 ","response":"range_response_count:1 size:4285"}
	{"level":"info","ts":"2025-03-17T12:43:18.625604Z","caller":"traceutil/trace.go:171","msg":"trace[60376047] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-6f7db97f95; range_end:; response_count:1; response_revision:1909; }","duration":"118.951477ms","start":"2025-03-17T12:43:18.506642Z","end":"2025-03-17T12:43:18.625593Z","steps":["trace[60376047] 'agreement among raft nodes before linearized reading'  (duration: 118.842099ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:43:18.625627Z","caller":"traceutil/trace.go:171","msg":"trace[766425522] range","detail":"{range_begin:/registry/pods/kube-system/nvidia-device-plugin-daemonset-s96nr; range_end:; response_count:1; response_revision:1909; }","duration":"118.933146ms","start":"2025-03-17T12:43:18.506685Z","end":"2025-03-17T12:43:18.625618Z","steps":["trace[766425522] 'agreement among raft nodes before linearized reading'  (duration: 118.825396ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:49:22.205220Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2412}
	{"level":"info","ts":"2025-03-17T12:49:22.274301Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":2412,"took":"68.328298ms","hash":2335236743,"current-db-size-bytes":9986048,"current-db-size":"10 MB","current-db-size-in-use-bytes":3117056,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-03-17T12:49:22.274357Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2335236743,"revision":2412,"compact-revision":-1}
	
	
	==> kernel <==
	 12:51:43 up  2:34,  0 users,  load average: 0.22, 0.51, 1.45
	Linux addons-012219 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649] <==
	I0317 12:49:42.645604       1 main.go:301] handling current node
	I0317 12:49:52.647543       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:49:52.647615       1 main.go:301] handling current node
	I0317 12:50:02.645183       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:50:02.645231       1 main.go:301] handling current node
	I0317 12:50:12.654202       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:50:12.654244       1 main.go:301] handling current node
	I0317 12:50:22.652417       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:50:22.652464       1 main.go:301] handling current node
	I0317 12:50:32.652450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:50:32.652502       1 main.go:301] handling current node
	I0317 12:50:42.645798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:50:42.645868       1 main.go:301] handling current node
	I0317 12:50:52.645611       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:50:52.645690       1 main.go:301] handling current node
	I0317 12:51:02.652446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:51:02.652490       1 main.go:301] handling current node
	I0317 12:51:12.647976       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:51:12.648028       1 main.go:301] handling current node
	I0317 12:51:22.646433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:51:22.646495       1 main.go:301] handling current node
	I0317 12:51:32.645029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:51:32.645080       1 main.go:301] handling current node
	I0317 12:51:42.645976       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:51:42.646013       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983] <==
	W0317 12:42:47.051524       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0317 12:42:47.365563       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0317 12:42:47.757473       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0317 12:43:03.635813       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58138: use of closed network connection
	E0317 12:43:03.818342       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58162: use of closed network connection
	I0317 12:43:13.586843       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.198.122"}
	I0317 12:43:34.949156       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0317 12:43:41.483627       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0317 12:43:41.678398       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.52.27"}
	I0317 12:43:41.692581       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0317 12:43:42.711288       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0317 12:43:50.250114       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0317 12:44:15.694314       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.694381       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:15.708477       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.708564       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:15.709759       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.709811       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:15.745127       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.745201       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:15.767186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.767239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0317 12:44:16.709982       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0317 12:44:16.767220       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0317 12:44:16.945072       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8] <==
	E0317 12:51:26.814715       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:51:30.674954       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:51:30.675927       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="batch.volcano.sh/v1alpha1, Resource=jobs"
	W0317 12:51:30.676892       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:51:30.676928       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:51:32.732212       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:51:32.733287       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=podgroups"
	W0317 12:51:32.734254       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:51:32.734293       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:51:35.187648       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:51:35.188851       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0317 12:51:35.189989       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:51:35.190024       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:51:41.497888       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:51:41.498948       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="bus.volcano.sh/v1alpha1, Resource=commands"
	W0317 12:51:41.499865       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:51:41.499902       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:51:42.789693       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:51:42.790687       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobflows"
	W0317 12:51:42.791520       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:51:42.791556       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:51:43.326641       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:51:43.327716       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobtemplates"
	W0317 12:51:43.329348       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:51:43.329398       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
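
Note: the controller-manager's repeating list/watch failures for volcano.sh and snapshot.storage.k8s.io resources follow directly from the CRD deletions logged by the apiserver at 12:42:47 and 12:44:15 ("Terminating all watchers from cacher ..."); the metadata informers keep retrying until they resync past the removed groups. A quick way to confirm the CRDs are gone (sketch):

    kubectl --context addons-012219 get crd | grep -E 'volcano|snapshot' || echo "no matching CRDs"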
	
	
	==> kube-proxy [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3] <==
	I0317 12:39:33.459942       1 server_linux.go:66] "Using iptables proxy"
	I0317 12:39:34.065102       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0317 12:39:34.065210       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 12:39:34.462554       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 12:39:34.462648       1 server_linux.go:170] "Using iptables Proxier"
	I0317 12:39:34.548519       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 12:39:34.548961       1 server.go:497] "Version info" version="v1.32.2"
	I0317 12:39:34.548980       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 12:39:34.563177       1 config.go:199] "Starting service config controller"
	I0317 12:39:34.563231       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 12:39:34.563268       1 config.go:105] "Starting endpoint slice config controller"
	I0317 12:39:34.563274       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 12:39:34.564040       1 config.go:329] "Starting node config controller"
	I0317 12:39:34.564055       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 12:39:34.664095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 12:39:34.664167       1 shared_informer.go:320] Caches are synced for service config
	I0317 12:39:37.464457       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42] <==
	W0317 12:39:23.464420       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 12:39:23.466171       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:23.466555       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 12:39:23.466702       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:23.466938       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 12:39:23.466981       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.268854       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 12:39:24.268916       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.269972       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 12:39:24.270018       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.278129       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 12:39:24.278189       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.344802       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 12:39:24.344855       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0317 12:39:24.390622       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 12:39:24.390672       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.425578       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 12:39:24.425627       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.573072       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 12:39:24.573157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.590139       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 12:39:24.590216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.681289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 12:39:24.681350       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0317 12:39:26.559813       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 12:49:31 addons-012219 kubelet[1625]: E0317 12:49:31.537773    1625 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Mar 17 12:49:31 addons-012219 kubelet[1625]: E0317 12:49:31.537886    1625 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hh4v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(a2751ab5-cd1c-44a3-a6ba-dba98b254a96): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Mar 17 12:49:31 addons-012219 kubelet[1625]: E0317 12:49:31.539087    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:49:31 addons-012219 kubelet[1625]: E0317 12:49:31.988826    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:49:43 addons-012219 kubelet[1625]: E0317 12:49:43.988428    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:49:44 addons-012219 kubelet[1625]: E0317 12:49:44.988480    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:49:57 addons-012219 kubelet[1625]: E0317 12:49:57.989328    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:49:58 addons-012219 kubelet[1625]: E0317 12:49:58.989187    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:50:02 addons-012219 kubelet[1625]: I0317 12:50:02.987614    1625 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Mar 17 12:50:11 addons-012219 kubelet[1625]: E0317 12:50:11.989220    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:50:13 addons-012219 kubelet[1625]: E0317 12:50:13.988913    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:50:25 addons-012219 kubelet[1625]: E0317 12:50:25.989027    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:50:28 addons-012219 kubelet[1625]: E0317 12:50:28.988629    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:50:37 addons-012219 kubelet[1625]: E0317 12:50:37.990730    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:50:42 addons-012219 kubelet[1625]: E0317 12:50:42.988637    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:50:50 addons-012219 kubelet[1625]: E0317 12:50:50.989210    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:50:53 addons-012219 kubelet[1625]: E0317 12:50:53.989331    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:51:02 addons-012219 kubelet[1625]: E0317 12:51:02.988736    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:51:07 addons-012219 kubelet[1625]: E0317 12:51:07.988655    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:51:15 addons-012219 kubelet[1625]: E0317 12:51:15.989331    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:51:22 addons-012219 kubelet[1625]: E0317 12:51:22.988724    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:51:27 addons-012219 kubelet[1625]: I0317 12:51:27.987301    1625 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Mar 17 12:51:27 addons-012219 kubelet[1625]: E0317 12:51:27.988375    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:51:37 addons-012219 kubelet[1625]: E0317 12:51:37.989131    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:51:42 addons-012219 kubelet[1625]: E0317 12:51:42.988399    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	
	
	==> storage-provisioner [7e31db05a70a8aa2cb0f75b72d342f7e671f1ea0e1be6634ce27012647e92af9] <==
	I0317 12:39:38.160831       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0317 12:39:38.264352       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0317 12:39:38.267238       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0317 12:39:38.360209       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0317 12:39:38.360490       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-012219_f580fdba-4164-417d-99e0-bbfdff8b9108!
	I0317 12:39:38.361811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e6be1f1-63bf-4a91-a4d7-e3d46b4cb84d", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-012219_f580fdba-4164-417d-99e0-bbfdff8b9108 became leader
	I0317 12:39:38.465181       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-012219_f580fdba-4164-417d-99e0-bbfdff8b9108!
	

                                                
                                                
-- /stdout --
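
Note: every pull failure in the kubelet log above has the same root cause: Docker Hub answered HTTP 429 (toomanyrequests) because the node pulls anonymously and the unauthenticated pull quota was exhausted. A minimal sketch for checking the remaining quota from the CI host, using the rate-limit preview repository that Docker documents for this purpose (assumes curl and jq are installed; TOKEN is an illustrative variable name):

    # Fetch an anonymous pull token scoped to Docker's rate-limit preview repository.
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    # Inspect the ratelimit-limit / ratelimit-remaining headers on the manifest endpoint.
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'
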
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-012219 -n addons-012219
helpers_test.go:261: (dbg) Run:  kubectl --context addons-012219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path ingress-nginx-admission-create-l6q8w ingress-nginx-admission-patch-t76b7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-012219 describe pod nginx test-local-path ingress-nginx-admission-create-l6q8w ingress-nginx-admission-patch-t76b7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-012219 describe pod nginx test-local-path ingress-nginx-admission-create-l6q8w ingress-nginx-admission-patch-t76b7: exit status 1 (83.499721ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-012219/192.168.49.2
	Start Time:       Mon, 17 Mar 2025 12:43:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.34
	IPs:
	  IP:  10.244.0.34
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hh4v9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hh4v9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  8m3s                    default-scheduler  Successfully assigned default/nginx to addons-012219
	  Normal   Pulling    5m2s (x5 over 8m2s)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m59s (x5 over 7m59s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m59s (x5 over 7m59s)   kubelet            Error: ErrImagePull
	  Warning  Failed     2m59s (x19 over 7m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m31s (x21 over 7m58s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-012219/192.168.49.2
	Start Time:       Mon, 17 Mar 2025 12:43:25 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nft6x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-nft6x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  8m19s                   default-scheduler  Successfully assigned default/test-local-path to addons-012219
	  Warning  Failed     6m49s (x4 over 8m15s)   kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m26s (x5 over 8m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m26s                   kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m11s (x19 over 8m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m56s (x20 over 8m15s)  kubelet            Back-off pulling image "busybox:stable"
	  Normal   Pulling    2m41s (x6 over 8m19s)   kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l6q8w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t76b7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-012219 describe pod nginx test-local-path ingress-nginx-admission-create-l6q8w ingress-nginx-admission-patch-t76b7: exit status 1
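
Note: the exit status 1 above is expected rather than a separate failure: two of the four pods passed to kubectl describe (the ingress-nginx admission jobs) had already been deleted, so kubectl describes the surviving pods on stdout and reports NotFound for the missing ones on stderr. A sketch that describes only pods which still exist (names resolved dynamically):

    # List non-running pods with their namespaces, then describe each one individually.
    kubectl --context addons-012219 get pods -A --field-selector=status.phase!=Running \
      --no-headers -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name |
    while read -r ns name; do
      kubectl --context addons-012219 describe pod "$name" -n "$ns"
    done
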
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 addons disable ingress --alsologtostderr -v=1: (7.727396603s)
--- FAIL: TestAddons/parallel/Ingress (491.87s)
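
Note: the test never reached its ingress assertions; it timed out waiting for the nginx pod, whose image was never pulled. One possible mitigation, sketched under the assumption that the CI host's Docker daemon is authenticated or still has pull quota, is to pull once on the host and side-load the image so kubelet never contacts Docker Hub:

    # Pull on the host, then copy the image into the minikube node's containerd store.
    docker pull docker.io/nginx:alpine
    out/minikube-linux-amd64 -p addons-012219 image load docker.io/nginx:alpine
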

                                                
                                    
TestAddons/parallel/LocalPath (232.29s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-012219 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-012219 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [288fd0b6-8224-4d26-9aa9-20812cfdeca9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:901: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:901: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-012219 -n addons-012219
addons_test.go:901: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-03-17 12:46:25.422281187 +0000 UTC m=+489.674130813
addons_test.go:901: (dbg) Run:  kubectl --context addons-012219 describe po test-local-path -n default
addons_test.go:901: (dbg) kubectl --context addons-012219 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-012219/192.168.49.2
Start Time:       Mon, 17 Mar 2025 12:43:25 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
IP:  10.244.0.32
Containers:
busybox:
Container ID:  
Image:         busybox:stable
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nft6x (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  test-pvc
ReadOnly:   false
kube-api-access-nft6x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/test-local-path to addons-012219
Warning  Failed     90s (x4 over 2m56s)  kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    25s (x9 over 2m56s)  kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     25s (x9 over 2m56s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    10s (x5 over 3m)     kubelet            Pulling image "busybox:stable"
Warning  Failed     7s (x5 over 2m56s)   kubelet            Error: ErrImagePull
Warning  Failed     7s                   kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
addons_test.go:901: (dbg) Run:  kubectl --context addons-012219 logs test-local-path -n default
addons_test.go:901: (dbg) Non-zero exit: kubectl --context addons-012219 logs test-local-path -n default: exit status 1 (77.691981ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:901: kubectl --context addons-012219 logs test-local-path -n default: exit status 1
addons_test.go:902: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
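
Note: TestAddons/parallel/LocalPath fails for the same underlying reason as Ingress: busybox:stable cannot be pulled anonymously past the 429 limit. An alternative mitigation sketch is to authenticate pulls through an imagePullSecret on the default service account; dockerhub-creds and the DOCKER_USER/DOCKER_PAT variables below are illustrative placeholders, not part of the test suite:

    # Create a registry secret from Docker Hub credentials (placeholders shown).
    kubectl --context addons-012219 create secret docker-registry dockerhub-creds \
      --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PAT"
    # Attach it to the default service account so new pods in this namespace use it.
    kubectl --context addons-012219 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'
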
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-012219
helpers_test.go:235: (dbg) docker inspect addons-012219:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf",
	        "Created": "2025-03-17T12:39:11.6117619Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 455652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T12:39:11.648441729Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf/hostname",
	        "HostsPath": "/var/lib/docker/containers/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf/hosts",
	        "LogPath": "/var/lib/docker/containers/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf/8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf-json.log",
	        "Name": "/addons-012219",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-012219:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-012219",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8197043953b271260b844f01368eb6294459dd32030fca676a89e7c55b3b7baf",
	                "LowerDir": "/var/lib/docker/overlay2/a74098a93f2345c9e4264de07f8a2e26b053757299012a821a0e2ec221e9ec58-init/diff:/var/lib/docker/overlay2/0d1b72eeaeef000e911d7896b151fb0d0a984c18eeb180d19223ea8ba67fdac6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a74098a93f2345c9e4264de07f8a2e26b053757299012a821a0e2ec221e9ec58/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a74098a93f2345c9e4264de07f8a2e26b053757299012a821a0e2ec221e9ec58/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a74098a93f2345c9e4264de07f8a2e26b053757299012a821a0e2ec221e9ec58/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-012219",
	                "Source": "/var/lib/docker/volumes/addons-012219/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-012219",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-012219",
	                "name.minikube.sigs.k8s.io": "addons-012219",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "452ad4fa346bbc598add717085bed619e052b873df7af970d51fdbc4e83feeb5",
	            "SandboxKey": "/var/run/docker/netns/452ad4fa346b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-012219": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:86:92:3e:af:06",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d3969b4da548f201032412c0cc3078db46294c18bc50d2dd5fac1526b374ada7",
	                    "EndpointID": "0f281088beec74782f3e18095976832a627c3e17e266ddc1de94d77add036cd0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-012219",
	                        "8197043953b2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
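
Note: the inspect output confirms the node container matches the start flags and rules out a resource problem: HostConfig.Memory is 4194304000 bytes (4000 * 1024^2, i.e. --memory=4000) and NanoCpus is 2000000000 (2 CPUs). A sketch for extracting just those fields (assumes jq is installed):

    # Pull the resource caps out of the inspect JSON.
    docker inspect addons-012219 | jq '.[0].HostConfig | {Memory, NanoCpus}'
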
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-012219 -n addons-012219
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 logs -n 25: (1.300817195s)
helpers_test.go:252: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| delete  | -p download-only-498596              | download-only-498596   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| delete  | -p download-only-960465              | download-only-960465   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| delete  | -p download-only-498596              | download-only-498596   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| start   | --download-only -p                   | download-docker-513231 | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | download-docker-513231               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-513231            | download-docker-513231 | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| start   | --download-only -p                   | binary-mirror-312807   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | binary-mirror-312807                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45577               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-312807              | binary-mirror-312807   | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| addons  | disable dashboard -p                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | addons-012219                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | addons-012219                        |                        |         |         |                     |                     |
	| start   | -p addons-012219 --wait=true         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:42 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:42 UTC | 17 Mar 25 12:42 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | -p addons-012219                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-012219 ip                     | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | disable cloud-spanner                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons disable         | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-012219 addons                 | addons-012219          | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:38:48
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:38:48.035665  455052 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:38:48.036294  455052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:38:48.036342  455052 out.go:358] Setting ErrFile to fd 2...
	I0317 12:38:48.036350  455052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:38:48.036801  455052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 12:38:48.037791  455052 out.go:352] Setting JSON to false
	I0317 12:38:48.038760  455052 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8468,"bootTime":1742206660,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:38:48.038904  455052 start.go:139] virtualization: kvm guest
	I0317 12:38:48.040562  455052 out.go:177] * [addons-012219] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:38:48.041822  455052 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 12:38:48.041817  455052 notify.go:220] Checking for updates...
	I0317 12:38:48.043176  455052 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:38:48.044454  455052 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:38:48.045722  455052 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 12:38:48.046826  455052 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 12:38:48.048090  455052 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:38:48.049578  455052 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:38:48.074854  455052 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 12:38:48.074957  455052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:38:48.127449  455052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-17 12:38:48.118130941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:38:48.127557  455052 docker.go:318] overlay module found
	I0317 12:38:48.129183  455052 out.go:177] * Using the docker driver based on user configuration
	I0317 12:38:48.130332  455052 start.go:297] selected driver: docker
	I0317 12:38:48.130353  455052 start.go:901] validating driver "docker" against <nil>
	I0317 12:38:48.130368  455052 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:38:48.131173  455052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:38:48.182534  455052 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-17 12:38:48.173184645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:38:48.182748  455052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:38:48.182959  455052 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:38:48.184638  455052 out.go:177] * Using Docker driver with root privileges
	I0317 12:38:48.185738  455052 cni.go:84] Creating CNI manager for ""
	I0317 12:38:48.185832  455052 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:38:48.185848  455052 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 12:38:48.185934  455052 start.go:340] cluster config:
	{Name:addons-012219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:38:48.187222  455052 out.go:177] * Starting "addons-012219" primary control-plane node in "addons-012219" cluster
	I0317 12:38:48.188445  455052 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 12:38:48.189727  455052 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 12:38:48.190812  455052 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:38:48.190861  455052 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 12:38:48.190875  455052 cache.go:56] Caching tarball of preloaded images
	I0317 12:38:48.190891  455052 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 12:38:48.191017  455052 preload.go:172] Found /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 12:38:48.191033  455052 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 12:38:48.191471  455052 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/config.json ...
	I0317 12:38:48.191502  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/config.json: {Name:mk5ae75b173bff0b4f3b12df1725ab9cf5ff3206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:38:48.208531  455052 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 12:38:48.208738  455052 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0317 12:38:48.208767  455052 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0317 12:38:48.208775  455052 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0317 12:38:48.208790  455052 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0317 12:38:48.208801  455052 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 from local cache
	I0317 12:39:01.030051  455052 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 from cached tarball
	I0317 12:39:01.030110  455052 cache.go:230] Successfully downloaded all kic artifacts
	I0317 12:39:01.030185  455052 start.go:360] acquireMachinesLock for addons-012219: {Name:mk4f9029816aabb75cfe9bdbdbb316adafd6cfa3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:39:01.030314  455052 start.go:364] duration metric: took 105.313µs to acquireMachinesLock for "addons-012219"
	I0317 12:39:01.030358  455052 start.go:93] Provisioning new machine with config: &{Name:addons-012219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 12:39:01.030431  455052 start.go:125] createHost starting for "" (driver="docker")
	I0317 12:39:01.032554  455052 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0317 12:39:01.032866  455052 start.go:159] libmachine.API.Create for "addons-012219" (driver="docker")
	I0317 12:39:01.032909  455052 client.go:168] LocalClient.Create starting
	I0317 12:39:01.033127  455052 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem
	I0317 12:39:01.250466  455052 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem
	I0317 12:39:01.750594  455052 cli_runner.go:164] Run: docker network inspect addons-012219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 12:39:01.770703  455052 cli_runner.go:211] docker network inspect addons-012219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 12:39:01.770792  455052 network_create.go:284] running [docker network inspect addons-012219] to gather additional debugging logs...
	I0317 12:39:01.770810  455052 cli_runner.go:164] Run: docker network inspect addons-012219
	W0317 12:39:01.791389  455052 cli_runner.go:211] docker network inspect addons-012219 returned with exit code 1
	I0317 12:39:01.791428  455052 network_create.go:287] error running [docker network inspect addons-012219]: docker network inspect addons-012219: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-012219 not found
	I0317 12:39:01.791459  455052 network_create.go:289] output of [docker network inspect addons-012219]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-012219 not found
	
	** /stderr **
	I0317 12:39:01.791608  455052 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 12:39:01.812027  455052 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0020911c0}
	I0317 12:39:01.812090  455052 network_create.go:124] attempt to create docker network addons-012219 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0317 12:39:01.812146  455052 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-012219 addons-012219
	I0317 12:39:01.870671  455052 network_create.go:108] docker network addons-012219 192.168.49.0/24 created
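Once the network exists, the subnet picked above can be confirmed with the same Go template the inspect calls in this log already use (an illustrative invocation, not output captured from this run):

	docker network inspect addons-012219 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# expected to print 192.168.49.0/24 for this cluster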
	I0317 12:39:01.870727  455052 kic.go:121] calculated static IP "192.168.49.2" for the "addons-012219" container
	I0317 12:39:01.870809  455052 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 12:39:01.888169  455052 cli_runner.go:164] Run: docker volume create addons-012219 --label name.minikube.sigs.k8s.io=addons-012219 --label created_by.minikube.sigs.k8s.io=true
	I0317 12:39:01.907968  455052 oci.go:103] Successfully created a docker volume addons-012219
	I0317 12:39:01.908179  455052 cli_runner.go:164] Run: docker run --rm --name addons-012219-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-012219 --entrypoint /usr/bin/test -v addons-012219:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 12:39:06.872289  455052 cli_runner.go:217] Completed: docker run --rm --name addons-012219-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-012219 --entrypoint /usr/bin/test -v addons-012219:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib: (4.964041866s)
	I0317 12:39:06.872360  455052 oci.go:107] Successfully prepared a docker volume addons-012219
	I0317 12:39:06.872407  455052 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:39:06.872435  455052 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 12:39:06.872519  455052 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-012219:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 12:39:11.538297  455052 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-012219:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.665711957s)
	I0317 12:39:11.538334  455052 kic.go:203] duration metric: took 4.665893918s to extract preloaded images to volume ...
	W0317 12:39:11.538500  455052 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 12:39:11.538611  455052 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 12:39:11.593645  455052 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-012219 --name addons-012219 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-012219 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-012219 --network addons-012219 --ip 192.168.49.2 --volume addons-012219:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 12:39:11.877382  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Running}}
	I0317 12:39:11.896804  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:11.916694  455052 cli_runner.go:164] Run: docker exec addons-012219 stat /var/lib/dpkg/alternatives/iptables
	I0317 12:39:11.961992  455052 oci.go:144] the created container "addons-012219" has a running status.
	I0317 12:39:11.962040  455052 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa...
	I0317 12:39:12.496926  455052 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 12:39:12.520345  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:12.539423  455052 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 12:39:12.539446  455052 kic_runner.go:114] Args: [docker exec --privileged addons-012219 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 12:39:12.591629  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:12.613026  455052 machine.go:93] provisionDockerMachine start ...
	I0317 12:39:12.613173  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:12.632700  455052 main.go:141] libmachine: Using SSH client type: native
	I0317 12:39:12.632985  455052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0317 12:39:12.633003  455052 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 12:39:12.768094  455052 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-012219
	
	I0317 12:39:12.768130  455052 ubuntu.go:169] provisioning hostname "addons-012219"
	I0317 12:39:12.768210  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:12.787598  455052 main.go:141] libmachine: Using SSH client type: native
	I0317 12:39:12.787821  455052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0317 12:39:12.787838  455052 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-012219 && echo "addons-012219" | sudo tee /etc/hostname
	I0317 12:39:12.936726  455052 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-012219
	
	I0317 12:39:12.936809  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:12.954937  455052 main.go:141] libmachine: Using SSH client type: native
	I0317 12:39:12.955163  455052 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0317 12:39:12.955181  455052 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-012219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-012219/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-012219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 12:39:13.093093  455052 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:39:13.093132  455052 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20539-446828/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-446828/.minikube}
	I0317 12:39:13.093178  455052 ubuntu.go:177] setting up certificates
	I0317 12:39:13.093191  455052 provision.go:84] configureAuth start
	I0317 12:39:13.093250  455052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-012219
	I0317 12:39:13.111440  455052 provision.go:143] copyHostCerts
	I0317 12:39:13.111541  455052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/key.pem (1675 bytes)
	I0317 12:39:13.111698  455052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/ca.pem (1082 bytes)
	I0317 12:39:13.111825  455052 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/cert.pem (1123 bytes)
	I0317 12:39:13.111941  455052 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem org=jenkins.addons-012219 san=[127.0.0.1 192.168.49.2 addons-012219 localhost minikube]
	I0317 12:39:13.162824  455052 provision.go:177] copyRemoteCerts
	I0317 12:39:13.162892  455052 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 12:39:13.162936  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.181586  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.281817  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 12:39:13.308715  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0317 12:39:13.335335  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
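The server certificate generated at provision.go:117 above was requested with SANs [127.0.0.1 192.168.49.2 addons-012219 localhost minikube]; after the scp steps it can be checked on the node with openssl (illustrative command, assuming OpenSSL 1.1.1+ for the -ext option):

	sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName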
	I0317 12:39:13.361942  455052 provision.go:87] duration metric: took 268.734518ms to configureAuth
	I0317 12:39:13.361975  455052 ubuntu.go:193] setting minikube options for container-runtime
	I0317 12:39:13.362170  455052 config.go:182] Loaded profile config "addons-012219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:39:13.362187  455052 machine.go:96] duration metric: took 749.138253ms to provisionDockerMachine
	I0317 12:39:13.362195  455052 client.go:171] duration metric: took 12.329276946s to LocalClient.Create
	I0317 12:39:13.362217  455052 start.go:167] duration metric: took 12.329355429s to libmachine.API.Create "addons-012219"
	I0317 12:39:13.362224  455052 start.go:293] postStartSetup for "addons-012219" (driver="docker")
	I0317 12:39:13.362233  455052 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 12:39:13.362278  455052 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 12:39:13.362314  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.381057  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.482200  455052 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 12:39:13.485959  455052 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 12:39:13.485992  455052 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 12:39:13.486003  455052 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 12:39:13.486012  455052 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 12:39:13.486025  455052 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-446828/.minikube/addons for local assets ...
	I0317 12:39:13.486108  455052 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-446828/.minikube/files for local assets ...
	I0317 12:39:13.486140  455052 start.go:296] duration metric: took 123.908916ms for postStartSetup
	I0317 12:39:13.486452  455052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-012219
	I0317 12:39:13.505708  455052 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/config.json ...
	I0317 12:39:13.506012  455052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:39:13.506061  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.524830  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.618002  455052 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 12:39:13.622926  455052 start.go:128] duration metric: took 12.592476216s to createHost
	I0317 12:39:13.622956  455052 start.go:83] releasing machines lock for "addons-012219", held for 12.59262781s
	I0317 12:39:13.623035  455052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-012219
	I0317 12:39:13.642865  455052 ssh_runner.go:195] Run: cat /version.json
	I0317 12:39:13.642925  455052 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 12:39:13.643002  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.642931  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:13.663466  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.663820  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:13.756498  455052 ssh_runner.go:195] Run: systemctl --version
	I0317 12:39:13.835206  455052 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 12:39:13.840449  455052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 12:39:13.867139  455052 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
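The find/sed pipeline above gives the stock loopback CNI config an explicit name and pins cniVersion to 1.0.0; the patched file would look roughly like this (reconstructed from the sed expressions, not captured from the node):

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}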
	I0317 12:39:13.867227  455052 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 12:39:13.896974  455052 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0317 12:39:13.897004  455052 start.go:495] detecting cgroup driver to use...
	I0317 12:39:13.897060  455052 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 12:39:13.897129  455052 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 12:39:13.909957  455052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:39:13.921484  455052 docker.go:217] disabling cri-docker service (if available) ...
	I0317 12:39:13.921564  455052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 12:39:13.935505  455052 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 12:39:13.950483  455052 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 12:39:14.027108  455052 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 12:39:14.108664  455052 docker.go:233] disabling docker service ...
	I0317 12:39:14.108739  455052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 12:39:14.128684  455052 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 12:39:14.140295  455052 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 12:39:14.223859  455052 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 12:39:14.311339  455052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 12:39:14.323210  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:39:14.340391  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 12:39:14.351125  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 12:39:14.362117  455052 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 12:39:14.362181  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 12:39:14.372457  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:39:14.383021  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 12:39:14.393284  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:39:14.404023  455052 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 12:39:14.414135  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 12:39:14.424920  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 12:39:14.435840  455052 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 12:39:14.447169  455052 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 12:39:14.455892  455052 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 12:39:14.464970  455052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:39:14.538457  455052 ssh_runner.go:195] Run: sudo systemctl restart containerd
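Pieced together, the sed edits above leave /etc/containerd/config.toml with roughly these CRI-plugin fields before the restart (a sketch reconstructed from the commands, not a dump of the actual file; untouched keys omitted):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false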
	I0317 12:39:14.648203  455052 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 12:39:14.648284  455052 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 12:39:14.652351  455052 start.go:563] Will wait 60s for crictl version
	I0317 12:39:14.652423  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:39:14.655987  455052 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 12:39:14.692655  455052 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 12:39:14.692751  455052 ssh_runner.go:195] Run: containerd --version
	I0317 12:39:14.719458  455052 ssh_runner.go:195] Run: containerd --version
	I0317 12:39:14.747914  455052 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 12:39:14.749467  455052 cli_runner.go:164] Run: docker network inspect addons-012219 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 12:39:14.768502  455052 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0317 12:39:14.772651  455052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:39:14.784531  455052 kubeadm.go:883] updating cluster {Name:addons-012219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 12:39:14.784658  455052 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:39:14.784705  455052 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:39:14.821811  455052 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 12:39:14.821838  455052 containerd.go:534] Images already preloaded, skipping extraction
	I0317 12:39:14.821903  455052 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:39:14.856662  455052 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 12:39:14.856689  455052 cache_images.go:84] Images are preloaded, skipping loading
	I0317 12:39:14.856698  455052 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 containerd true true} ...
	I0317 12:39:14.856794  455052 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-012219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 12:39:14.856848  455052 ssh_runner.go:195] Run: sudo crictl info
	I0317 12:39:14.892646  455052 cni.go:84] Creating CNI manager for ""
	I0317 12:39:14.892679  455052 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:39:14.892696  455052 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 12:39:14.892720  455052 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-012219 NodeName:addons-012219 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 12:39:14.892840  455052 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-012219"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 12:39:14.892907  455052 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 12:39:14.902144  455052 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 12:39:14.902217  455052 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 12:39:14.911539  455052 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0317 12:39:14.931119  455052 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 12:39:14.949717  455052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
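The 2303-byte payload scp'd here is the kubeadm config rendered above. Outside of a minikube run, a config like this can be sanity-checked without modifying the host via kubeadm's dry-run mode (illustrative invocation, not from this run):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run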
	I0317 12:39:14.968581  455052 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0317 12:39:14.972599  455052 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:39:14.985705  455052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:39:15.067453  455052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:39:15.082243  455052 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219 for IP: 192.168.49.2
	I0317 12:39:15.082266  455052 certs.go:194] generating shared ca certs ...
	I0317 12:39:15.082283  455052 certs.go:226] acquiring lock for ca certs: {Name:mk0dd75eca163be7a048e137f4b2d32cf3ae35d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.082507  455052 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key
	I0317 12:39:15.215977  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt ...
	I0317 12:39:15.216013  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt: {Name:mk6c5810acd75cb9b3a95204aeb4923648134fec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.216200  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key ...
	I0317 12:39:15.216211  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key: {Name:mk42dd3bc2bef3996c8d9aca4b91a21a3483ce7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.216284  455052 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key
	I0317 12:39:15.438662  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.crt ...
	I0317 12:39:15.438706  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.crt: {Name:mk85c7404144a9503537fe74ab1fafce6d5efe93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.438914  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key ...
	I0317 12:39:15.438928  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key: {Name:mkb6f268b434dbbb859dd2b57fc506ee093f4f5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:15.439006  455052 certs.go:256] generating profile certs ...
	I0317 12:39:15.439068  455052 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.key
	I0317 12:39:15.439083  455052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt with IP's: []
	I0317 12:39:16.321284  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt ...
	I0317 12:39:16.321323  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: {Name:mk1eac80c5f0c5edd4268bd4c7f32a2877239abf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:16.321507  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.key ...
	I0317 12:39:16.321528  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.key: {Name:mke0c492e654f28c7c87390951b63149bdb94f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:16.321599  455052 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key.e4ed0683
	I0317 12:39:16.321617  455052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt.e4ed0683 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0317 12:39:16.548474  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt.e4ed0683 ...
	I0317 12:39:16.548517  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt.e4ed0683: {Name:mkd89694ae1d0c6fe037f6c581e4c6ae7215f3a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:16.548694  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key.e4ed0683 ...
	I0317 12:39:16.548708  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key.e4ed0683: {Name:mk5c4663ccf0ea3254d6e2b196b6a7b99f9d07d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:16.548785  455052 certs.go:381] copying /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt.e4ed0683 -> /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt
	I0317 12:39:16.548860  455052 certs.go:385] copying /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key.e4ed0683 -> /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key
	I0317 12:39:16.548905  455052 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.key
	I0317 12:39:16.548921  455052 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.crt with IP's: []
	I0317 12:39:17.036293  455052 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.crt ...
	I0317 12:39:17.036358  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.crt: {Name:mk30436cd124ef55c65e6fe2ce66a0585594f30b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:17.036538  455052 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.key ...
	I0317 12:39:17.036552  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.key: {Name:mk6febb43cba57732591cbb93ae48f0cb1241b3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:17.036737  455052 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 12:39:17.036779  455052 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem (1082 bytes)
	I0317 12:39:17.036800  455052 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem (1123 bytes)
	I0317 12:39:17.036819  455052 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem (1675 bytes)
	I0317 12:39:17.037536  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 12:39:17.063150  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 12:39:17.089188  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 12:39:17.114585  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 12:39:17.140004  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0317 12:39:17.165950  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 12:39:17.191067  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 12:39:17.215893  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 12:39:17.241779  455052 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 12:39:17.267836  455052 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 12:39:17.286839  455052 ssh_runner.go:195] Run: openssl version
	I0317 12:39:17.293157  455052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 12:39:17.303415  455052 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:39:17.307327  455052 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:39 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:39:17.307401  455052 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:39:17.314186  455052 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
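The b5213941.0 link name created here is not arbitrary: it is the subject hash printed by the openssl x509 -hash command just above, plus the ".0" suffix OpenSSL's hashed-directory CA lookup expects. Reproducing it by hand (illustrative):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink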
	I0317 12:39:17.324679  455052 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 12:39:17.328279  455052 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:39:17.328363  455052 kubeadm.go:392] StartCluster: {Name:addons-012219 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:39:17.328462  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 12:39:17.328541  455052 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 12:39:17.367264  455052 cri.go:89] found id: ""
	I0317 12:39:17.367360  455052 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 12:39:17.376658  455052 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 12:39:17.386258  455052 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 12:39:17.386328  455052 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 12:39:17.396059  455052 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 12:39:17.396096  455052 kubeadm.go:157] found existing configuration files:
	
	I0317 12:39:17.396152  455052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 12:39:17.406310  455052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 12:39:17.406366  455052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 12:39:17.415771  455052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 12:39:17.426078  455052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 12:39:17.426213  455052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 12:39:17.436888  455052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 12:39:17.449038  455052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 12:39:17.449119  455052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 12:39:17.458740  455052 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 12:39:17.468085  455052 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 12:39:17.468161  455052 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 12:39:17.477138  455052 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 12:39:17.536415  455052 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 12:39:17.536762  455052 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 12:39:17.594526  455052 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 12:39:26.716704  455052 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 12:39:26.716762  455052 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 12:39:26.716880  455052 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 12:39:26.716982  455052 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 12:39:26.717040  455052 kubeadm.go:310] OS: Linux
	I0317 12:39:26.717098  455052 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 12:39:26.717230  455052 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 12:39:26.717289  455052 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 12:39:26.717340  455052 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 12:39:26.717425  455052 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 12:39:26.717511  455052 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 12:39:26.717568  455052 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 12:39:26.717612  455052 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 12:39:26.717656  455052 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 12:39:26.717725  455052 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 12:39:26.717806  455052 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 12:39:26.717918  455052 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 12:39:26.717970  455052 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 12:39:26.719819  455052 out.go:235]   - Generating certificates and keys ...
	I0317 12:39:26.719928  455052 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 12:39:26.719987  455052 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 12:39:26.720057  455052 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 12:39:26.720122  455052 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 12:39:26.720180  455052 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 12:39:26.720234  455052 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 12:39:26.720332  455052 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 12:39:26.720471  455052 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-012219 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0317 12:39:26.720520  455052 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 12:39:26.720631  455052 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-012219 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0317 12:39:26.720688  455052 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 12:39:26.720755  455052 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 12:39:26.720796  455052 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 12:39:26.720846  455052 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 12:39:26.720892  455052 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 12:39:26.720952  455052 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 12:39:26.721012  455052 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 12:39:26.721067  455052 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 12:39:26.721121  455052 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 12:39:26.721229  455052 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 12:39:26.721313  455052 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 12:39:26.722821  455052 out.go:235]   - Booting up control plane ...
	I0317 12:39:26.722963  455052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 12:39:26.723094  455052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 12:39:26.723204  455052 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 12:39:26.723394  455052 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 12:39:26.723522  455052 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 12:39:26.723578  455052 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 12:39:26.723736  455052 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 12:39:26.723882  455052 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 12:39:26.723962  455052 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.075708ms
	I0317 12:39:26.724033  455052 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 12:39:26.724123  455052 kubeadm.go:310] [api-check] The API server is healthy after 5.001229447s
	I0317 12:39:26.724272  455052 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 12:39:26.724491  455052 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 12:39:26.724595  455052 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 12:39:26.724789  455052 kubeadm.go:310] [mark-control-plane] Marking the node addons-012219 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 12:39:26.724861  455052 kubeadm.go:310] [bootstrap-token] Using token: bcu5f8.5eu7wklvfllmqleo
	I0317 12:39:26.726327  455052 out.go:235]   - Configuring RBAC rules ...
	I0317 12:39:26.726478  455052 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 12:39:26.726566  455052 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 12:39:26.726720  455052 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 12:39:26.726833  455052 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0317 12:39:26.726967  455052 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 12:39:26.727121  455052 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 12:39:26.727234  455052 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 12:39:26.727301  455052 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 12:39:26.727347  455052 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 12:39:26.727358  455052 kubeadm.go:310] 
	I0317 12:39:26.727416  455052 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 12:39:26.727427  455052 kubeadm.go:310] 
	I0317 12:39:26.727494  455052 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 12:39:26.727501  455052 kubeadm.go:310] 
	I0317 12:39:26.727533  455052 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 12:39:26.727612  455052 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 12:39:26.727658  455052 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 12:39:26.727667  455052 kubeadm.go:310] 
	I0317 12:39:26.727717  455052 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 12:39:26.727722  455052 kubeadm.go:310] 
	I0317 12:39:26.727762  455052 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 12:39:26.727769  455052 kubeadm.go:310] 
	I0317 12:39:26.727833  455052 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 12:39:26.728012  455052 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 12:39:26.728122  455052 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 12:39:26.728143  455052 kubeadm.go:310] 
	I0317 12:39:26.728244  455052 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 12:39:26.728372  455052 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 12:39:26.728388  455052 kubeadm.go:310] 
	I0317 12:39:26.728496  455052 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bcu5f8.5eu7wklvfllmqleo \
	I0317 12:39:26.728637  455052 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e05049646db0098d7df87a082a7b96dd6c54c151b6030ddf1f26dcd0982d4713 \
	I0317 12:39:26.728676  455052 kubeadm.go:310] 	--control-plane 
	I0317 12:39:26.728685  455052 kubeadm.go:310] 
	I0317 12:39:26.728798  455052 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 12:39:26.728805  455052 kubeadm.go:310] 
	I0317 12:39:26.728933  455052 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bcu5f8.5eu7wklvfllmqleo \
	I0317 12:39:26.729128  455052 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e05049646db0098d7df87a082a7b96dd6c54c151b6030ddf1f26dcd0982d4713 
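
The --discovery-token-ca-cert-hash printed above pins the cluster CA: it is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, prefixed with "sha256:". A minimal Go sketch of recomputing it, assuming the CA lives at the /var/lib/minikube/certs path shown in the [certs] lines earlier in this log:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the [certs] lines earlier in this log.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo (RFC 7469 style).
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
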
	I0317 12:39:26.729143  455052 cni.go:84] Creating CNI manager for ""
	I0317 12:39:26.729150  455052 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:39:26.730819  455052 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 12:39:26.732461  455052 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 12:39:26.737191  455052 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 12:39:26.737228  455052 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 12:39:26.757414  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
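
The cni.go lines above pick kindnet because the docker driver is paired with the containerd runtime, which has no built-in bridge networking of its own. An illustrative sketch of that decision; recommendCNI is a hypothetical helper, not minikube's actual code:

package main

import "fmt"

// recommendCNI mirrors the logged choice: with the docker driver and the
// containerd runtime, a CNI plugin such as kindnet is needed.
func recommendCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "containerd" {
		return "kindnet"
	}
	return "" // empty: leave networking to the runtime's default
}

func main() {
	fmt.Println(recommendCNI("docker", "containerd")) // kindnet
}
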
	I0317 12:39:26.978716  455052 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 12:39:26.978832  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:26.978900  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-012219 minikube.k8s.io/updated_at=2025_03_17T12_39_26_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=addons-012219 minikube.k8s.io/primary=true
	I0317 12:39:26.986869  455052 ops.go:34] apiserver oom_adj: -16
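
ops.go reports the apiserver's oom_adj of -16, meaning the kernel OOM killer is biased away from killing it. A minimal sketch of reading that value, assuming a Linux /proc filesystem; the readOOMAdj helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readOOMAdj returns the oom_adj value for the given PID
// (negative values make the OOM killer less likely to pick the process).
func readOOMAdj(pid int) (int, error) {
	raw, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(raw)))
}

func main() {
	adj, err := readOOMAdj(os.Getpid())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("oom_adj:", adj) // the apiserver above reports -16
}
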
	I0317 12:39:27.069756  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:27.569849  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:28.070194  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:28.570825  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:29.069844  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:29.569896  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:30.070141  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:30.570759  455052 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:39:30.643541  455052 kubeadm.go:1113] duration metric: took 3.66480023s to wait for elevateKubeSystemPrivileges
	I0317 12:39:30.643580  455052 kubeadm.go:394] duration metric: took 13.315224606s to StartCluster
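
The repeated `kubectl get sa default` runs above, spaced roughly 500ms apart, poll for the default ServiceAccount to exist before the cluster-admin binding can take effect. A hedged sketch of such a poll, assuming kubectl is on PATH (the log itself invokes the full /var/lib/minikube/binaries path via sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
	os.Exit(1)
}
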
	I0317 12:39:30.643601  455052 settings.go:142] acquiring lock: {Name:mk72030e2b6f80365da0b928b8b3c5c72d9da724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:30.643729  455052 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:39:30.644109  455052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/kubeconfig: {Name:mk0cd04f754d83d5d928c90de569ec9144a7d4e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:39:30.644299  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 12:39:30.644348  455052 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 12:39:30.644427  455052 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0317 12:39:30.644597  455052 config.go:182] Loaded profile config "addons-012219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:39:30.644619  455052 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-012219"
	I0317 12:39:30.644633  455052 addons.go:69] Setting yakd=true in profile "addons-012219"
	I0317 12:39:30.644638  455052 addons.go:69] Setting metrics-server=true in profile "addons-012219"
	I0317 12:39:30.644650  455052 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-012219"
	I0317 12:39:30.644666  455052 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-012219"
	I0317 12:39:30.644675  455052 addons.go:238] Setting addon metrics-server=true in "addons-012219"
	I0317 12:39:30.644693  455052 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-012219"
	I0317 12:39:30.644703  455052 addons.go:69] Setting volcano=true in profile "addons-012219"
	I0317 12:39:30.644708  455052 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-012219"
	I0317 12:39:30.644715  455052 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-012219"
	I0317 12:39:30.644718  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644723  455052 addons.go:69] Setting volumesnapshots=true in profile "addons-012219"
	I0317 12:39:30.644744  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644751  455052 addons.go:238] Setting addon volumesnapshots=true in "addons-012219"
	I0317 12:39:30.644780  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644796  455052 addons.go:69] Setting registry=true in profile "addons-012219"
	I0317 12:39:30.644813  455052 addons.go:238] Setting addon registry=true in "addons-012219"
	I0317 12:39:30.644837  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645011  455052 addons.go:69] Setting storage-provisioner=true in profile "addons-012219"
	I0317 12:39:30.645041  455052 addons.go:238] Setting addon storage-provisioner=true in "addons-012219"
	I0317 12:39:30.645067  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645121  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645291  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645343  455052 addons.go:69] Setting inspektor-gadget=true in profile "addons-012219"
	I0317 12:39:30.644780  455052 addons.go:69] Setting cloud-spanner=true in profile "addons-012219"
	I0317 12:39:30.645385  455052 addons.go:238] Setting addon inspektor-gadget=true in "addons-012219"
	I0317 12:39:30.645421  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644694  455052 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-012219"
	I0317 12:39:30.645467  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645531  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.644676  455052 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-012219"
	I0317 12:39:30.645706  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.644713  455052 addons.go:238] Setting addon volcano=true in "addons-012219"
	I0317 12:39:30.645967  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645977  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645350  455052 addons.go:69] Setting ingress-dns=true in profile "addons-012219"
	I0317 12:39:30.646063  455052 addons.go:238] Setting addon ingress-dns=true in "addons-012219"
	I0317 12:39:30.646099  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.646143  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.646440  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.646632  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645388  455052 addons.go:238] Setting addon cloud-spanner=true in "addons-012219"
	I0317 12:39:30.647227  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.645298  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.648194  455052 out.go:177] * Verifying Kubernetes components...
	I0317 12:39:30.648364  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.645317  455052 addons.go:69] Setting ingress=true in profile "addons-012219"
	I0317 12:39:30.649855  455052 addons.go:238] Setting addon ingress=true in "addons-012219"
	I0317 12:39:30.645326  455052 addons.go:69] Setting default-storageclass=true in profile "addons-012219"
	I0317 12:39:30.645339  455052 addons.go:69] Setting gcp-auth=true in profile "addons-012219"
	I0317 12:39:30.645355  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.644656  455052 addons.go:238] Setting addon yakd=true in "addons-012219"
	I0317 12:39:30.645937  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.649767  455052 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:39:30.649969  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.650227  455052 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-012219"
	I0317 12:39:30.645325  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.650682  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.650903  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.656389  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.656877  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
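
The timestamps in the block above jump backwards because each addon is configured on its own goroutine, so log lines from different workers interleave. An illustrative sketch of that fan-out pattern, not minikube's actual code:

package main

import (
	"fmt"
	"sync"
)

func main() {
	addons := []string{"ingress", "registry", "metrics-server", "csi-hostpath-driver"}
	var wg sync.WaitGroup
	for _, a := range addons {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			fmt.Printf("Setting addon %s=true in \"addons-012219\"\n", name)
			// real workers would now scp manifests and kubectl-apply them
		}(a)
	}
	wg.Wait()
}
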
	I0317 12:39:30.657273  455052 mustload.go:65] Loading cluster: addons-012219
	I0317 12:39:30.673520  455052 out.go:177]   - Using image docker.io/registry:2.8.3
	I0317 12:39:30.674781  455052 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0317 12:39:30.676894  455052 config.go:182] Loaded profile config "addons-012219": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:39:30.678394  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.680292  455052 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0317 12:39:30.680336  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0317 12:39:30.680404  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.690141  455052 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0317 12:39:30.694421  455052 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0317 12:39:30.694452  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0317 12:39:30.694532  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.699029  455052 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
	I0317 12:39:30.700369  455052 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
	I0317 12:39:30.701547  455052 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
	I0317 12:39:30.704584  455052 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0317 12:39:30.704617  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
	I0317 12:39:30.704799  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.732527  455052 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0317 12:39:30.734198  455052 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0317 12:39:30.734222  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0317 12:39:30.734288  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.734764  455052 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0317 12:39:30.736165  455052 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0317 12:39:30.736197  455052 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0317 12:39:30.736211  455052 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0317 12:39:30.736280  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.737506  455052 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0317 12:39:30.737526  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0317 12:39:30.737582  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.737799  455052 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-012219"
	I0317 12:39:30.737854  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.738350  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.741937  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0317 12:39:30.743129  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0317 12:39:30.743158  455052 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0317 12:39:30.743230  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.749229  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0317 12:39:30.751779  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.757697  455052 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0317 12:39:30.759304  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0317 12:39:30.759385  455052 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:39:30.760535  455052 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:39:30.760652  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0317 12:39:30.764350  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0317 12:39:30.764692  455052 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0317 12:39:30.764712  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0317 12:39:30.764783  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.766937  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0317 12:39:30.768032  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0317 12:39:30.769181  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0317 12:39:30.770260  455052 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0317 12:39:30.771185  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0317 12:39:30.771210  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0317 12:39:30.771282  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.774340  455052 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0317 12:39:30.776003  455052 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0317 12:39:30.776032  455052 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0317 12:39:30.776109  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.783240  455052 addons.go:238] Setting addon default-storageclass=true in "addons-012219"
	I0317 12:39:30.783295  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:30.783728  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:30.788545  455052 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0317 12:39:30.788572  455052 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 12:39:30.788629  455052 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0317 12:39:30.790038  455052 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0317 12:39:30.790060  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0317 12:39:30.790121  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.789063  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.790720  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0317 12:39:30.790737  455052 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0317 12:39:30.790793  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.791069  455052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:39:30.791082  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 12:39:30.791120  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.797195  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.798718  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.802862  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.806046  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.821704  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.826613  455052 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0317 12:39:30.827797  455052 out.go:177]   - Using image docker.io/busybox:stable
	I0317 12:39:30.827903  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.828417  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.829204  455052 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0317 12:39:30.829229  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0317 12:39:30.829288  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.829925  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.831815  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.833757  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.835190  455052 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 12:39:30.835212  455052 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 12:39:30.835267  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:30.846343  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	W0317 12:39:30.852133  455052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0317 12:39:30.852179  455052 retry.go:31] will retry after 165.673275ms: ssh: handshake failed: EOF
	I0317 12:39:30.861530  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.869345  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:30.870549  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	W0317 12:39:30.871334  455052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0317 12:39:30.871362  455052 retry.go:31] will retry after 367.240618ms: ssh: handshake failed: EOF
	W0317 12:39:30.872875  455052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0317 12:39:30.872906  455052 retry.go:31] will retry after 248.60274ms: ssh: handshake failed: EOF
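
The sshutil warnings above show handshake failures being retried after randomized delays (165ms, 367ms, 248ms), with the SSH port itself discovered through the docker container inspect template seen earlier. A minimal sketch of both steps, assuming docker is on PATH; sshHostPort is an illustrative name, not minikube's:

package main

import (
	"fmt"
	"math/rand"
	"net"
	"os"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks Docker which host port is mapped to the container's
// 22/tcp, using the same Go template that appears in the log.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("addons-012219")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Retry the dial with a randomized delay, in the spirit of the
	// "will retry after 165.673275ms" lines above.
	for attempt := 0; attempt < 5; attempt++ {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:"+port, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port reachable on", port)
			return
		}
		time.Sleep(time.Duration(100+rand.Intn(400)) * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "gave up dialing ssh port")
	os.Exit(1)
}
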
	I0317 12:39:30.966162  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 12:39:30.966299  455052 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0317 12:39:31.019574  455052 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0317 12:39:31.019616  455052 retry.go:31] will retry after 457.982718ms: ssh: handshake failed: EOF
	I0317 12:39:31.257175  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0317 12:39:31.267972  455052 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0317 12:39:31.268051  455052 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0317 12:39:31.359257  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0317 12:39:31.359372  455052 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0317 12:39:31.365280  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:39:31.449821  455052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0317 12:39:31.449865  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0317 12:39:31.462540  455052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0317 12:39:31.462592  455052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0317 12:39:31.551215  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0317 12:39:31.560996  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0317 12:39:31.651199  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0317 12:39:31.656616  455052 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0317 12:39:31.656726  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0317 12:39:31.748151  455052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0317 12:39:31.748256  455052 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0317 12:39:31.749578  455052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0317 12:39:31.749609  455052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0317 12:39:31.751444  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0317 12:39:31.751522  455052 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0317 12:39:31.755408  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0317 12:39:31.762205  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 12:39:31.766344  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0317 12:39:31.851723  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0317 12:39:31.968271  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0317 12:39:32.060014  455052 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0317 12:39:32.060104  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0317 12:39:32.065631  455052 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0317 12:39:32.065768  455052 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0317 12:39:32.161052  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0317 12:39:32.161164  455052 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0317 12:39:32.169358  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0317 12:39:32.169448  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0317 12:39:32.365698  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0317 12:39:32.464479  455052 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0317 12:39:32.464516  455052 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0317 12:39:32.467193  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0317 12:39:32.467280  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0317 12:39:32.846142  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0317 12:39:32.955524  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0317 12:39:32.955555  455052 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0317 12:39:32.970714  455052 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0317 12:39:32.970743  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0317 12:39:33.062460  455052 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.096230372s)
	I0317 12:39:33.062510  455052 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
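
The sed pipeline that just completed splices a static hosts block, mapping host.minikube.internal to 192.168.49.1, into CoreDNS's Corefile ahead of the forward directive. A hedged Go sketch of the same transformation on an illustrative Corefile; injectHosts is not minikube's function:

package main

import (
	"fmt"
	"strings"
)

const hostsBlock = `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
`

// injectHosts inserts the hosts block immediately before the
// "forward . /etc/resolv.conf" directive, as the sed expression does.
func injectHosts(corefile string) string {
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	// Illustrative Corefile fragment, not the cluster's exact ConfigMap.
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
}
`
	fmt.Print(injectHosts(corefile))
}
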
	I0317 12:39:33.063931  455052 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.097605245s)
	I0317 12:39:33.064929  455052 node_ready.go:35] waiting up to 6m0s for node "addons-012219" to be "Ready" ...
	I0317 12:39:33.065239  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.807970086s)
	I0317 12:39:33.068002  455052 node_ready.go:49] node "addons-012219" has status "Ready":"True"
	I0317 12:39:33.068029  455052 node_ready.go:38] duration metric: took 3.064074ms for node "addons-012219" to be "Ready" ...
	I0317 12:39:33.068041  455052 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 12:39:33.152942  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0317 12:39:33.152978  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0317 12:39:33.160164  455052 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace to be "Ready" ...
	I0317 12:39:33.567628  455052 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-012219" context rescaled to 1 replicas
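
Rescaling coredns to a single replica on a one-node cluster is equivalent to the kubectl invocation below; this sketch shells out, whereas minikube's kapi package presumably does the same through client-go:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Context and namespace taken from the log line above.
	cmd := exec.Command("kubectl", "--context", "addons-012219",
		"-n", "kube-system", "scale", "deployment", "coredns", "--replicas=1")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
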
	I0317 12:39:33.754550  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0317 12:39:33.761198  455052 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:39:33.761298  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0317 12:39:33.852436  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0317 12:39:33.852536  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0317 12:39:34.254529  455052 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0317 12:39:34.254655  455052 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0317 12:39:34.462280  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:39:34.845500  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0317 12:39:34.845537  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0317 12:39:35.248845  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
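
The pod_ready lines poll a pod's Ready condition against a 6m0s budget. A hedged sketch of that loop using kubectl's jsonpath output instead of client-go; the pod name and timeout are taken from the log, the 2s poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True".
func podReady(ns, name string) bool {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		if podReady("kube-system", "amd-gpu-device-plugin-vshjt") {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for pod")
	os.Exit(1)
}
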
	I0317 12:39:35.362365  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0317 12:39:35.362400  455052 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0317 12:39:35.846643  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.295318375s)
	I0317 12:39:35.846730  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.285697264s)
	I0317 12:39:35.846912  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.481506128s)
	I0317 12:39:35.858428  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0317 12:39:35.858464  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0317 12:39:36.647042  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0317 12:39:36.647081  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0317 12:39:37.065685  455052 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0317 12:39:37.065718  455052 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0317 12:39:37.466458  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0317 12:39:37.758106  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:37.762024  455052 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0317 12:39:37.762225  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:37.784451  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:38.453145  455052 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0317 12:39:38.561607  455052 addons.go:238] Setting addon gcp-auth=true in "addons-012219"
	I0317 12:39:38.561781  455052 host.go:66] Checking if "addons-012219" exists ...
	I0317 12:39:38.562402  455052 cli_runner.go:164] Run: docker container inspect addons-012219 --format={{.State.Status}}
	I0317 12:39:38.583369  455052 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0317 12:39:38.583432  455052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-012219
	I0317 12:39:38.603362  455052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/addons-012219/id_rsa Username:docker}
	I0317 12:39:39.963042  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.311718597s)
	I0317 12:39:39.963103  455052 addons.go:479] Verifying addon ingress=true in "addons-012219"
	I0317 12:39:39.963146  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.200916336s)
	I0317 12:39:39.963106  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.207671739s)
	I0317 12:39:39.964972  455052 out.go:177] * Verifying ingress addon...
	I0317 12:39:39.967123  455052 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0317 12:39:39.974048  455052 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0317 12:39:39.974085  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:40.168255  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:40.471454  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:40.970837  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:41.549708  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:42.061728  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:42.169314  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:42.550695  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:42.553278  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.786881387s)
	I0317 12:39:42.553444  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.701613468s)
	I0317 12:39:42.553532  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.585225493s)
	I0317 12:39:42.553565  455052 addons.go:479] Verifying addon registry=true in "addons-012219"
	I0317 12:39:42.553630  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (10.187898413s)
	I0317 12:39:42.553712  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.707451979s)
	I0317 12:39:42.553739  455052 addons.go:479] Verifying addon metrics-server=true in "addons-012219"
	I0317 12:39:42.553800  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.79912533s)
	I0317 12:39:42.553972  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.091594119s)
	W0317 12:39:42.554015  455052 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0317 12:39:42.554040  455052 retry.go:31] will retry after 295.085511ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
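
The stderr above is the usual CRD establishment race: the single kubectl apply submits the VolumeSnapshotClass in the same invocation that creates the snapshot.storage.k8s.io CRDs, before the API server can serve the new kinds. minikube's own handling is to retry after a randomized backoff with apply --force (retry.go:31 above, and the re-apply at 12:39:42.850218 below does complete). An alternative ordering, sketched here with the manifest paths taken from this log plus "kubectl wait" (the Go wrapper itself is hypothetical, not minikube code):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run shells out to kubectl, roughly as ssh_runner does in the log above.
    func run(args ...string) error {
    	out, err := exec.Command("kubectl", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
    	}
    	return nil
    }

    func main() {
    	// 1. Create the CRDs on their own first.
    	for _, f := range []string{
    		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
    		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
    		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
    	} {
    		if err := run("apply", "-f", f); err != nil {
    			panic(err)
    		}
    	}
    	// 2. Block until the API server actually serves the new kinds.
    	if err := run("wait", "--for=condition=Established", "--timeout=60s",
    		"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
    		"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
    		"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
    		panic(err)
    	}
    	// 3. Only now apply the VolumeSnapshotClass that failed above.
    	if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
    		panic(err)
    	}
    }
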
	I0317 12:39:42.555834  455052 out.go:177] * Verifying registry addon...
	I0317 12:39:42.555863  455052 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-012219 service yakd-dashboard -n yakd-dashboard
	
	I0317 12:39:42.557616  455052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0317 12:39:42.581691  455052 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0317 12:39:42.581718  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
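
The long run of kapi.go:96 lines that follows is a fixed-interval poll: list the pods matching a label selector, log the current phase, and repeat until every match is Running. A minimal client-go sketch of that pattern (illustrative only; kapi.go's real loop is minikube-internal, and only the kubeconfig path, namespace, and selector are taken from the log):

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until every pod matching the selector is Running,
    // tolerating empty lists and transient list errors, like the loop above.
    func waitForLabel(c *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil // still Pending, as in the lines below
    				}
    			}
    			return true, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	c := kubernetes.NewForConfigOrDie(cfg)
    	// Namespace and selector are the ones being waited on above; the
    	// interval and timeout here are illustrative, not minikube's values.
    	if err := waitForLabel(c, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
    		panic(err)
    	}
    }
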
	I0317 12:39:42.850218  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0317 12:39:42.975974  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:43.075964  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:43.155821  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.689213369s)
	I0317 12:39:43.155936  455052 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-012219"
	I0317 12:39:43.155942  455052 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.572537291s)
	I0317 12:39:43.157617  455052 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0317 12:39:43.157777  455052 out.go:177] * Verifying csi-hostpath-driver addon...
	I0317 12:39:43.159814  455052 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0317 12:39:43.160582  455052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0317 12:39:43.160914  455052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0317 12:39:43.160937  455052 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0317 12:39:43.169759  455052 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0317 12:39:43.169793  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:43.261056  455052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0317 12:39:43.261091  455052 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0317 12:39:43.348254  455052 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0317 12:39:43.348283  455052 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0317 12:39:43.374483  455052 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
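
The three ssh_runner.go:362 lines above stage the gcp-auth manifests onto the node (the webhook manifest is rendered in memory, hence "scp memory -->"), and the Run line then applies all three in a single kubectl call. An illustrative local equivalent of that stage-then-apply step, with a placeholder standing in for the rendered webhook contents (not how minikube actually transfers files):

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// Stand-in for the manifest minikube renders in memory before copying.
    	webhook := []byte("# rendered gcp-auth-webhook.yaml contents\n")
    	if err := os.WriteFile("/etc/kubernetes/addons/gcp-auth-webhook.yaml", webhook, 0o644); err != nil {
    		panic(err)
    	}
    	// Same binary path, KUBECONFIG, and manifest list as the Run line above.
    	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.32.2/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/gcp-auth-ns.yaml",
    		"-f", "/etc/kubernetes/addons/gcp-auth-service.yaml",
    		"-f", "/etc/kubernetes/addons/gcp-auth-webhook.yaml")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }
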
	I0317 12:39:43.471359  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:43.572495  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:43.672384  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:43.971940  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:44.061084  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:44.165056  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:44.471394  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:44.649892  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:44.666193  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:44.751533  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:44.969390  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.119113252s)
	I0317 12:39:44.969492  455052 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.594968733s)
	I0317 12:39:44.971520  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:44.971590  455052 addons.go:479] Verifying addon gcp-auth=true in "addons-012219"
	I0317 12:39:44.973564  455052 out.go:177] * Verifying gcp-auth addon...
	I0317 12:39:44.976076  455052 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0317 12:39:44.978975  455052 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0317 12:39:45.072406  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:45.173295  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:45.470631  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:45.561686  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:45.665692  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:45.971717  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:46.072106  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:46.164021  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:46.471361  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:46.561277  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:46.664535  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:46.667048  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:46.971565  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:47.072951  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:47.164423  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:47.471232  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:47.561723  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:47.664038  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:47.971270  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:48.071786  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:48.163686  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:48.471336  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:48.560822  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:48.663757  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:48.971328  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:49.072463  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:49.164533  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:49.166752  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:49.471099  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:49.561783  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:49.664190  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:49.970445  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:50.061306  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:50.164637  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:50.471009  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:50.561219  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:50.664148  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:50.971530  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:51.072842  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:51.164105  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:51.471202  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:51.561253  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:51.665220  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:51.668404  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:51.971071  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:52.072052  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:52.164149  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:52.470257  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:52.561883  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:52.663695  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:52.970649  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:53.061766  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:53.164768  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:53.470620  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:53.561248  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:53.665611  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:53.971016  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:54.062562  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:54.163544  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:54.165160  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:54.471067  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:54.562262  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:54.664297  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:54.972022  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:55.072859  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:55.164482  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:55.471483  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:55.571635  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:55.672167  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:55.970581  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:56.061517  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:56.163692  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:56.165852  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:56.470891  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:56.560840  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:56.664424  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:56.971609  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:57.071944  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:57.172129  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:57.471492  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:57.561328  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:57.664371  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:57.971765  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:58.061136  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:58.164717  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:58.167169  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:39:58.470811  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:58.561875  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:58.664227  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:58.971242  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:59.071723  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:59.173224  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:59.470583  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:39:59.561582  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:39:59.663620  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:39:59.974228  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:00.075968  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:00.181322  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:00.182002  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:00.557628  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:00.560801  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:00.664499  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:00.972439  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:01.073207  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:01.165021  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:01.470938  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:01.562561  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:01.664967  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:01.972110  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:02.073252  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:02.174502  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:02.471130  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:02.561466  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:02.665135  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:02.667591  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:02.972192  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:03.061318  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:03.165767  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:03.470986  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:03.572154  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:03.664213  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:03.971487  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:04.069612  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:04.165226  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:04.471284  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:04.561631  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:04.664524  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:04.971508  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:05.073253  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:05.164360  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:05.166454  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:05.470885  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:05.562638  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:05.664366  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:05.971238  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:06.061221  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:06.164909  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:06.471155  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:06.571876  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:06.664237  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:06.971428  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:07.061486  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:07.163639  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:07.470523  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:07.561415  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:07.664399  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:07.666587  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:07.971118  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:08.061747  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:08.163808  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:08.471246  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:08.561010  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:08.664428  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:08.971582  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:09.062098  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:09.164757  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:09.471103  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:09.561332  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:09.663639  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:09.970712  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:10.061070  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:10.164198  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:10.166333  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:10.470978  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:10.561251  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:10.663441  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:10.970760  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:11.061703  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:11.163861  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:11.471220  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:11.561820  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:11.664502  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:11.971329  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:12.061290  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:12.164136  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:12.471314  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:12.562162  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:12.664543  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:12.666877  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:12.971802  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:13.061379  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:13.164009  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:13.472479  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:13.562232  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:13.664301  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:13.970923  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:14.061876  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:14.164543  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:14.471702  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:14.561707  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:14.664669  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:14.971484  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:15.072515  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:15.163628  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:15.165978  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:15.471504  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:15.561274  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:15.664049  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:15.971437  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:16.061094  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:16.164746  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:16.471713  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:16.561773  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:16.664010  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:16.971221  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:17.061407  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:17.163749  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:17.166163  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:17.471449  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:17.561476  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:17.663628  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:17.971714  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:18.060744  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:18.163928  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:18.470532  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:18.561315  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:18.663470  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:18.971755  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:19.073190  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:19.166677  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:19.173869  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:19.471193  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:19.561520  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:19.664041  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:19.971753  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:20.061098  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:20.164104  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:20.470553  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:20.561338  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:20.663723  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:20.971982  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:21.060631  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:21.164519  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:21.166782  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:21.471230  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:21.560804  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:21.663752  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:21.970837  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:22.061546  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:22.164273  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:22.471264  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:22.561425  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:22.663649  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:22.971476  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:23.061583  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:23.163873  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:23.471076  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:23.561162  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:23.664420  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:23.666843  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:23.971320  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:24.072435  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:24.164277  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:24.470642  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:24.560842  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:24.663920  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:24.973709  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:25.076391  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:25.163684  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:25.470529  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:25.561505  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:25.663906  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:25.971534  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:26.072707  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:26.163741  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:26.166287  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:26.470597  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:26.561589  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:26.664129  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:26.970465  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:27.061507  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:27.163899  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:27.471149  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:27.561493  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:27.663736  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:27.971115  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:28.061301  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:28.163891  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:28.166793  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:28.471391  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:28.560643  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:28.664060  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:28.981446  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:29.081852  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:29.164091  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:29.470388  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:29.561673  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:29.663920  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:29.973060  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:30.060543  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:30.163606  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:30.471522  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:30.560621  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:30.664061  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:30.666611  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:30.990451  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:31.091406  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:31.164809  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:31.470756  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:31.561714  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:31.663874  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:31.971016  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:32.061564  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:32.164859  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:32.471579  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:32.561549  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:32.664019  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:32.970709  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:33.071305  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:33.164531  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:33.167039  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:33.470693  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:33.561678  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:33.664618  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:34.007171  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:34.061865  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:34.164041  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:34.470895  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:34.562349  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:34.663767  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:34.971330  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:35.061022  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:35.164413  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:35.471061  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:35.560933  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0317 12:40:35.664236  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:35.666602  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:35.987282  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:36.061034  455052 kapi.go:107] duration metric: took 53.503416924s to wait for kubernetes.io/minikube-addons=registry ...
	I0317 12:40:36.164007  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:36.470388  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:36.664609  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:36.971153  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:37.164856  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:37.470875  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:37.663839  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:37.970590  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:38.164550  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:38.166315  455052 pod_ready.go:103] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"False"
	I0317 12:40:38.471165  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:38.663884  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:38.970790  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:39.173617  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:39.176004  455052 pod_ready.go:93] pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.176032  455052 pod_ready.go:82] duration metric: took 1m6.015822117s for pod "amd-gpu-device-plugin-vshjt" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.176044  455052 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-d2bx4" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.192981  455052 pod_ready.go:93] pod "coredns-668d6bf9bc-d2bx4" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.193007  455052 pod_ready.go:82] duration metric: took 16.956484ms for pod "coredns-668d6bf9bc-d2bx4" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.193018  455052 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gf4gw" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.195263  455052 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-gf4gw" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-gf4gw" not found
	I0317 12:40:39.195295  455052 pod_ready.go:82] duration metric: took 2.270605ms for pod "coredns-668d6bf9bc-gf4gw" in "kube-system" namespace to be "Ready" ...
	E0317 12:40:39.195306  455052 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-gf4gw" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-gf4gw" not found
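
The NotFound above is expected: coredns was scaled down after startup, so the second replica recorded earlier no longer exists, and pod_ready.go:98 skips it rather than failing the wait. A sketch of that skip together with the Ready-condition test the surrounding pod_ready.go:93/:103 lines report (function and names are illustrative, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // readyOrGone reports whether the named pod has condition Ready=True,
    // treating NotFound as "gone, skip it" rather than an error.
    func readyOrGone(ctx context.Context, c kubernetes.Interface, ns, name string) (ready, gone bool, err error) {
    	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		return false, true, nil // e.g. a scaled-down coredns replica
    	}
    	if err != nil {
    		return false, false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == "Ready" {
    			return cond.Status == "True", false, nil
    		}
    	}
    	return false, false, nil // no Ready condition reported yet
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	c := kubernetes.NewForConfigOrDie(cfg)
    	ready, gone, err := readyOrGone(context.Background(), c, "kube-system", "coredns-668d6bf9bc-gf4gw")
    	fmt.Println(ready, gone, err)
    }
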
	I0317 12:40:39.195313  455052 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.200296  455052 pod_ready.go:93] pod "etcd-addons-012219" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.200357  455052 pod_ready.go:82] duration metric: took 5.035892ms for pod "etcd-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.200375  455052 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.205221  455052 pod_ready.go:93] pod "kube-apiserver-addons-012219" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.205243  455052 pod_ready.go:82] duration metric: took 4.860483ms for pod "kube-apiserver-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.205253  455052 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.364929  455052 pod_ready.go:93] pod "kube-controller-manager-addons-012219" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.364957  455052 pod_ready.go:82] duration metric: took 159.696703ms for pod "kube-controller-manager-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.364970  455052 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dd72m" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.471469  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:39.664758  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:39.764524  455052 pod_ready.go:93] pod "kube-proxy-dd72m" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:39.764556  455052 pod_ready.go:82] duration metric: took 399.576924ms for pod "kube-proxy-dd72m" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.764569  455052 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:39.970483  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:40.164026  455052 pod_ready.go:93] pod "kube-scheduler-addons-012219" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:40.164055  455052 pod_ready.go:82] duration metric: took 399.477967ms for pod "kube-scheduler-addons-012219" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:40.164065  455052 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s96nr" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:40.164021  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:40.565634  455052 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-s96nr" in "kube-system" namespace has status "Ready":"True"
	I0317 12:40:40.565663  455052 pod_ready.go:82] duration metric: took 401.590789ms for pod "nvidia-device-plugin-daemonset-s96nr" in "kube-system" namespace to be "Ready" ...
	I0317 12:40:40.565674  455052 pod_ready.go:39] duration metric: took 1m7.497617906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
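The pod_ready.go lines above poll each system pod and check its Ready condition until a per-pod timeout expires. A condensed client-go sketch of that pattern follows (illustrative only, with a hypothetical waitPodReady helper; minikube's actual implementation differs in detail):

    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or ctx expires.
    // Minimal sketch, assuming a configured *kubernetes.Clientset.
    func waitPodReady(ctx context.Context, c *kubernetes.Clientset, ns, name string) error {
    	for {
    		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    					return nil // the pod reports Ready:"True", as in the log lines above
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }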
	I0317 12:40:40.565698  455052 api_server.go:52] waiting for apiserver process to appear ...
	I0317 12:40:40.565750  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:40.565773  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:40:40.565833  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:40:40.606351  455052 cri.go:89] found id: "bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:40.606386  455052 cri.go:89] found id: ""
	I0317 12:40:40.606397  455052 logs.go:282] 1 containers: [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983]
	I0317 12:40:40.606449  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.610255  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:40:40.610344  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:40:40.646690  455052 cri.go:89] found id: "0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:40.646721  455052 cri.go:89] found id: ""
	I0317 12:40:40.646732  455052 logs.go:282] 1 containers: [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4]
	I0317 12:40:40.646797  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.650751  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:40:40.650819  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:40:40.664837  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:40.694081  455052 cri.go:89] found id: "9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:40.694109  455052 cri.go:89] found id: ""
	I0317 12:40:40.694120  455052 logs.go:282] 1 containers: [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056]
	I0317 12:40:40.694186  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.698401  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:40:40.698484  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:40:40.736966  455052 cri.go:89] found id: "5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:40.736995  455052 cri.go:89] found id: ""
	I0317 12:40:40.737006  455052 logs.go:282] 1 containers: [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42]
	I0317 12:40:40.737055  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.741396  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:40:40.741478  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:40:40.780872  455052 cri.go:89] found id: "d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:40.780909  455052 cri.go:89] found id: ""
	I0317 12:40:40.780917  455052 logs.go:282] 1 containers: [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3]
	I0317 12:40:40.780965  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.785035  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:40:40.785132  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:40:40.822415  455052 cri.go:89] found id: "379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:40.822441  455052 cri.go:89] found id: ""
	I0317 12:40:40.822450  455052 logs.go:282] 1 containers: [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8]
	I0317 12:40:40.822502  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:40.826522  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:40:40.826592  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:40:40.863344  455052 cri.go:89] found id: "9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:40.863373  455052 cri.go:89] found id: ""
	I0317 12:40:40.863384  455052 logs.go:282] 1 containers: [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649]
	I0317 12:40:40.863447  455052 ssh_runner.go:195] Run: which crictl
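Each "listing CRI containers" / "which crictl" pair above corresponds to running crictl on the node over SSH and collecting the printed container IDs. A rough local equivalent of that step (illustrative only; ssh_runner.go executes the command remotely, not locally):

    package sketch

    import (
    	"context"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>` from the
    // log above: crictl prints one container ID per line; empty output means none.
    func listContainerIDs(ctx context.Context, name string) ([]string, error) {
    	out, err := exec.CommandContext(ctx, "sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }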
	I0317 12:40:40.867562  455052 logs.go:123] Gathering logs for container status ...
	I0317 12:40:40.867591  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 12:40:40.911123  455052 logs.go:123] Gathering logs for kubelet ...
	I0317 12:40:40.911167  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:40:40.985490  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:41.012076  455052 logs.go:123] Gathering logs for kube-apiserver [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983] ...
	I0317 12:40:41.012172  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:41.076592  455052 logs.go:123] Gathering logs for coredns [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056] ...
	I0317 12:40:41.076640  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:41.121182  455052 logs.go:123] Gathering logs for kube-controller-manager [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8] ...
	I0317 12:40:41.121224  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:41.164748  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:41.188610  455052 logs.go:123] Gathering logs for dmesg ...
	I0317 12:40:41.188658  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:40:41.216301  455052 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:40:41.216368  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:40:41.454361  455052 logs.go:123] Gathering logs for etcd [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4] ...
	I0317 12:40:41.454423  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:41.471030  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:41.502158  455052 logs.go:123] Gathering logs for kube-scheduler [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42] ...
	I0317 12:40:41.502206  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:41.546833  455052 logs.go:123] Gathering logs for kube-proxy [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3] ...
	I0317 12:40:41.546886  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:41.586006  455052 logs.go:123] Gathering logs for kindnet [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649] ...
	I0317 12:40:41.586050  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:41.626915  455052 logs.go:123] Gathering logs for containerd ...
	I0317 12:40:41.626946  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 12:40:41.664917  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:41.971006  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:42.163415  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:42.470573  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:42.664852  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:42.971232  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:43.165271  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:43.471719  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:43.663641  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:44.046664  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:44.164908  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:44.197012  455052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:40:44.213431  455052 api_server.go:72] duration metric: took 1m13.569048658s to wait for apiserver process to appear ...
	I0317 12:40:44.213465  455052 api_server.go:88] waiting for apiserver healthz status ...
	I0317 12:40:44.213525  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:40:44.213592  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:40:44.250956  455052 cri.go:89] found id: "bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:44.250984  455052 cri.go:89] found id: ""
	I0317 12:40:44.250991  455052 logs.go:282] 1 containers: [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983]
	I0317 12:40:44.251037  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.255218  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:40:44.255300  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:40:44.292739  455052 cri.go:89] found id: "0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:44.292764  455052 cri.go:89] found id: ""
	I0317 12:40:44.292773  455052 logs.go:282] 1 containers: [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4]
	I0317 12:40:44.292837  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.297234  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:40:44.297309  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:40:44.334012  455052 cri.go:89] found id: "9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:44.334037  455052 cri.go:89] found id: ""
	I0317 12:40:44.334045  455052 logs.go:282] 1 containers: [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056]
	I0317 12:40:44.334109  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.339441  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:40:44.339535  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:40:44.377185  455052 cri.go:89] found id: "5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:44.377208  455052 cri.go:89] found id: ""
	I0317 12:40:44.377216  455052 logs.go:282] 1 containers: [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42]
	I0317 12:40:44.377270  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.381213  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:40:44.381307  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:40:44.419209  455052 cri.go:89] found id: "d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:44.419236  455052 cri.go:89] found id: ""
	I0317 12:40:44.419246  455052 logs.go:282] 1 containers: [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3]
	I0317 12:40:44.419304  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.423259  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:40:44.423334  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:40:44.461215  455052 cri.go:89] found id: "379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:44.461241  455052 cri.go:89] found id: ""
	I0317 12:40:44.461250  455052 logs.go:282] 1 containers: [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8]
	I0317 12:40:44.461313  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.465079  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:40:44.465172  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:40:44.470298  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:44.505756  455052 cri.go:89] found id: "9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:44.505789  455052 cri.go:89] found id: ""
	I0317 12:40:44.505800  455052 logs.go:282] 1 containers: [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649]
	I0317 12:40:44.505862  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:44.510159  455052 logs.go:123] Gathering logs for kubelet ...
	I0317 12:40:44.510194  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:40:44.612068  455052 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:40:44.612121  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:40:44.665305  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:44.711514  455052 logs.go:123] Gathering logs for kube-apiserver [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983] ...
	I0317 12:40:44.711552  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:44.774357  455052 logs.go:123] Gathering logs for coredns [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056] ...
	I0317 12:40:44.774406  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:44.813382  455052 logs.go:123] Gathering logs for kube-scheduler [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42] ...
	I0317 12:40:44.813419  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:44.863365  455052 logs.go:123] Gathering logs for kube-proxy [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3] ...
	I0317 12:40:44.863421  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:44.905079  455052 logs.go:123] Gathering logs for kindnet [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649] ...
	I0317 12:40:44.905112  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:44.970391  455052 logs.go:123] Gathering logs for container status ...
	I0317 12:40:44.970440  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 12:40:44.971654  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:45.078232  455052 logs.go:123] Gathering logs for dmesg ...
	I0317 12:40:45.078288  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:40:45.106924  455052 logs.go:123] Gathering logs for etcd [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4] ...
	I0317 12:40:45.106966  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:45.164276  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:45.188075  455052 logs.go:123] Gathering logs for kube-controller-manager [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8] ...
	I0317 12:40:45.188122  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:45.295976  455052 logs.go:123] Gathering logs for containerd ...
	I0317 12:40:45.296037  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 12:40:45.471680  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:45.664738  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:45.972089  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:46.166202  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:46.472051  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:46.664657  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:46.970725  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:47.164553  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:47.471404  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:47.664810  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:47.873980  455052 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0317 12:40:47.878931  455052 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0317 12:40:47.879917  455052 api_server.go:141] control plane version: v1.32.2
	I0317 12:40:47.879944  455052 api_server.go:131] duration metric: took 3.666470439s to wait for apiserver health ...
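The healthz probe at api_server.go:253 above is an HTTPS GET that succeeds on a 200 response with body "ok". A stripped-down sketch of such a probe (the real check uses the cluster's TLS credentials; the InsecureSkipVerify below is a placeholder for illustration only):

    package sketch

    import (
    	"context"
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy probes https://<host>/healthz, as in the log above.
    func apiserverHealthy(ctx context.Context, host string) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
    	}
    	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://"+host+"/healthz", nil)
    	if err != nil {
    		return err
    	}
    	resp, err := client.Do(req)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
    	}
    	return nil
    }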
	I0317 12:40:47.879951  455052 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 12:40:47.879974  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:40:47.880026  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:40:47.918549  455052 cri.go:89] found id: "bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:47.918579  455052 cri.go:89] found id: ""
	I0317 12:40:47.918589  455052 logs.go:282] 1 containers: [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983]
	I0317 12:40:47.918637  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:47.922513  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:40:47.922593  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:40:47.966599  455052 cri.go:89] found id: "0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:47.966717  455052 cri.go:89] found id: ""
	I0317 12:40:47.966734  455052 logs.go:282] 1 containers: [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4]
	I0317 12:40:47.966814  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:47.972421  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:47.974109  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:40:47.974185  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:40:48.083818  455052 cri.go:89] found id: "9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:48.083846  455052 cri.go:89] found id: ""
	I0317 12:40:48.083857  455052 logs.go:282] 1 containers: [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056]
	I0317 12:40:48.083929  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.088505  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:40:48.088582  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:40:48.164562  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:48.182882  455052 cri.go:89] found id: "5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:48.182907  455052 cri.go:89] found id: ""
	I0317 12:40:48.182917  455052 logs.go:282] 1 containers: [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42]
	I0317 12:40:48.182973  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.187292  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:40:48.187364  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:40:48.266843  455052 cri.go:89] found id: "d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:48.266868  455052 cri.go:89] found id: ""
	I0317 12:40:48.266876  455052 logs.go:282] 1 containers: [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3]
	I0317 12:40:48.266924  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.271302  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:40:48.271366  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:40:48.362922  455052 cri.go:89] found id: "379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:48.362948  455052 cri.go:89] found id: ""
	I0317 12:40:48.362958  455052 logs.go:282] 1 containers: [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8]
	I0317 12:40:48.363018  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.366903  455052 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:40:48.366987  455052 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:40:48.445208  455052 cri.go:89] found id: "9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:48.445239  455052 cri.go:89] found id: ""
	I0317 12:40:48.445250  455052 logs.go:282] 1 containers: [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649]
	I0317 12:40:48.445316  455052 ssh_runner.go:195] Run: which crictl
	I0317 12:40:48.449909  455052 logs.go:123] Gathering logs for kubelet ...
	I0317 12:40:48.449941  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:40:48.470951  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:48.564227  455052 logs.go:123] Gathering logs for dmesg ...
	I0317 12:40:48.564287  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:40:48.594274  455052 logs.go:123] Gathering logs for etcd [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4] ...
	I0317 12:40:48.594327  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4"
	I0317 12:40:48.666652  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:48.687502  455052 logs.go:123] Gathering logs for coredns [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056] ...
	I0317 12:40:48.687545  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056"
	I0317 12:40:48.866713  455052 logs.go:123] Gathering logs for kube-controller-manager [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8] ...
	I0317 12:40:48.866758  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8"
	I0317 12:40:48.974564  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:49.093453  455052 logs.go:123] Gathering logs for container status ...
	I0317 12:40:49.093505  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 12:40:49.164436  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:49.278407  455052 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:40:49.278448  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:40:49.471253  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:49.656290  455052 logs.go:123] Gathering logs for kube-apiserver [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983] ...
	I0317 12:40:49.656363  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983"
	I0317 12:40:49.666846  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:49.971519  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:49.985522  455052 logs.go:123] Gathering logs for kube-scheduler [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42] ...
	I0317 12:40:49.985575  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42"
	I0317 12:40:50.080623  455052 logs.go:123] Gathering logs for kube-proxy [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3] ...
	I0317 12:40:50.080664  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3"
	I0317 12:40:50.150398  455052 logs.go:123] Gathering logs for kindnet [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649] ...
	I0317 12:40:50.150435  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649"
	I0317 12:40:50.165326  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:50.190479  455052 logs.go:123] Gathering logs for containerd ...
	I0317 12:40:50.190509  455052 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 12:40:50.501424  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:50.664829  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:51.046634  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:51.165068  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:51.472476  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:51.664153  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:51.971633  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:52.164477  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:52.471024  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:52.664532  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:52.765578  455052 system_pods.go:59] 19 kube-system pods found
	I0317 12:40:52.765623  455052 system_pods.go:61] "amd-gpu-device-plugin-vshjt" [f90dc780-3781-4dfa-aa72-9f01de540522] Running
	I0317 12:40:52.765632  455052 system_pods.go:61] "coredns-668d6bf9bc-d2bx4" [3984c722-20f8-4593-8acc-69f7a96879cc] Running
	I0317 12:40:52.765637  455052 system_pods.go:61] "csi-hostpath-attacher-0" [6409109a-02f5-4560-a0ce-ff758742667a] Running
	I0317 12:40:52.765642  455052 system_pods.go:61] "csi-hostpath-resizer-0" [db7fcb3f-a582-496d-8f39-b4b58ac628a9] Running
	I0317 12:40:52.765654  455052 system_pods.go:61] "csi-hostpathplugin-dxflx" [e4429700-36d8-4fe3-8ee4-ec430215ad55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0317 12:40:52.765663  455052 system_pods.go:61] "etcd-addons-012219" [02330569-8d20-41f9-b759-63f8904f2f4b] Running
	I0317 12:40:52.765675  455052 system_pods.go:61] "kindnet-cz7dg" [3e28249d-348a-4e40-b2b3-8b46b677ac10] Running
	I0317 12:40:52.765680  455052 system_pods.go:61] "kube-apiserver-addons-012219" [d053686d-1bfb-4c4f-83a6-9550a5c50bef] Running
	I0317 12:40:52.765690  455052 system_pods.go:61] "kube-controller-manager-addons-012219" [cfe19195-6038-44ef-93e8-2a3a1fa0eeb6] Running
	I0317 12:40:52.765704  455052 system_pods.go:61] "kube-ingress-dns-minikube" [8cd48e7c-55b2-4237-b945-d6e7be8d3040] Running
	I0317 12:40:52.765710  455052 system_pods.go:61] "kube-proxy-dd72m" [3c1ba3e7-f0a0-4520-ac21-293d84b96937] Running
	I0317 12:40:52.765714  455052 system_pods.go:61] "kube-scheduler-addons-012219" [f2a8c619-ebae-4ab1-9e80-476b5bc94a7c] Running
	I0317 12:40:52.765722  455052 system_pods.go:61] "metrics-server-7fbb699795-rmd9f" [457e13af-aba0-4869-9953-d240bdcf8c93] Running
	I0317 12:40:52.765727  455052 system_pods.go:61] "nvidia-device-plugin-daemonset-s96nr" [dd2959e8-cb33-4011-825c-beffbbfe67f2] Running
	I0317 12:40:52.765735  455052 system_pods.go:61] "registry-6c88467877-qxwgl" [455262b9-8f7c-405f-8f6a-e11619b4a82b] Running
	I0317 12:40:52.765740  455052 system_pods.go:61] "registry-proxy-6mr4n" [1ff4a6b3-772a-4bb4-b071-5fda919d74bb] Running
	I0317 12:40:52.765749  455052 system_pods.go:61] "snapshot-controller-68b874b76f-kqqj4" [4c44d0a7-10b5-4560-b08a-547f48a9d788] Running
	I0317 12:40:52.765754  455052 system_pods.go:61] "snapshot-controller-68b874b76f-vg6qw" [83b1a84d-5b5d-4f61-9899-b115352819b6] Running
	I0317 12:40:52.765762  455052 system_pods.go:61] "storage-provisioner" [2308e1c7-7aa6-49b3-ac63-c49fdf64fced] Running
	I0317 12:40:52.765771  455052 system_pods.go:74] duration metric: took 4.885812296s to wait for pod list to return data ...
	I0317 12:40:52.765785  455052 default_sa.go:34] waiting for default service account to be created ...
	I0317 12:40:52.768498  455052 default_sa.go:45] found service account: "default"
	I0317 12:40:52.768530  455052 default_sa.go:55] duration metric: took 2.736413ms for default service account to be created ...
	I0317 12:40:52.768543  455052 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 12:40:52.772352  455052 system_pods.go:86] 19 kube-system pods found
	I0317 12:40:52.772393  455052 system_pods.go:89] "amd-gpu-device-plugin-vshjt" [f90dc780-3781-4dfa-aa72-9f01de540522] Running
	I0317 12:40:52.772404  455052 system_pods.go:89] "coredns-668d6bf9bc-d2bx4" [3984c722-20f8-4593-8acc-69f7a96879cc] Running
	I0317 12:40:52.772412  455052 system_pods.go:89] "csi-hostpath-attacher-0" [6409109a-02f5-4560-a0ce-ff758742667a] Running
	I0317 12:40:52.772417  455052 system_pods.go:89] "csi-hostpath-resizer-0" [db7fcb3f-a582-496d-8f39-b4b58ac628a9] Running
	I0317 12:40:52.772427  455052 system_pods.go:89] "csi-hostpathplugin-dxflx" [e4429700-36d8-4fe3-8ee4-ec430215ad55] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0317 12:40:52.772438  455052 system_pods.go:89] "etcd-addons-012219" [02330569-8d20-41f9-b759-63f8904f2f4b] Running
	I0317 12:40:52.772445  455052 system_pods.go:89] "kindnet-cz7dg" [3e28249d-348a-4e40-b2b3-8b46b677ac10] Running
	I0317 12:40:52.772453  455052 system_pods.go:89] "kube-apiserver-addons-012219" [d053686d-1bfb-4c4f-83a6-9550a5c50bef] Running
	I0317 12:40:52.772459  455052 system_pods.go:89] "kube-controller-manager-addons-012219" [cfe19195-6038-44ef-93e8-2a3a1fa0eeb6] Running
	I0317 12:40:52.772469  455052 system_pods.go:89] "kube-ingress-dns-minikube" [8cd48e7c-55b2-4237-b945-d6e7be8d3040] Running
	I0317 12:40:52.772474  455052 system_pods.go:89] "kube-proxy-dd72m" [3c1ba3e7-f0a0-4520-ac21-293d84b96937] Running
	I0317 12:40:52.772482  455052 system_pods.go:89] "kube-scheduler-addons-012219" [f2a8c619-ebae-4ab1-9e80-476b5bc94a7c] Running
	I0317 12:40:52.772488  455052 system_pods.go:89] "metrics-server-7fbb699795-rmd9f" [457e13af-aba0-4869-9953-d240bdcf8c93] Running
	I0317 12:40:52.772500  455052 system_pods.go:89] "nvidia-device-plugin-daemonset-s96nr" [dd2959e8-cb33-4011-825c-beffbbfe67f2] Running
	I0317 12:40:52.772507  455052 system_pods.go:89] "registry-6c88467877-qxwgl" [455262b9-8f7c-405f-8f6a-e11619b4a82b] Running
	I0317 12:40:52.772513  455052 system_pods.go:89] "registry-proxy-6mr4n" [1ff4a6b3-772a-4bb4-b071-5fda919d74bb] Running
	I0317 12:40:52.772520  455052 system_pods.go:89] "snapshot-controller-68b874b76f-kqqj4" [4c44d0a7-10b5-4560-b08a-547f48a9d788] Running
	I0317 12:40:52.772525  455052 system_pods.go:89] "snapshot-controller-68b874b76f-vg6qw" [83b1a84d-5b5d-4f61-9899-b115352819b6] Running
	I0317 12:40:52.772538  455052 system_pods.go:89] "storage-provisioner" [2308e1c7-7aa6-49b3-ac63-c49fdf64fced] Running
	I0317 12:40:52.772548  455052 system_pods.go:126] duration metric: took 3.997984ms to wait for k8s-apps to be running ...
	I0317 12:40:52.772565  455052 system_svc.go:44] waiting for kubelet service to be running ....
	I0317 12:40:52.772634  455052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 12:40:52.787084  455052 system_svc.go:56] duration metric: took 14.507575ms WaitForService to wait for kubelet
	I0317 12:40:52.787128  455052 kubeadm.go:582] duration metric: took 1m22.14274535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:40:52.787164  455052 node_conditions.go:102] verifying NodePressure condition ...
	I0317 12:40:52.790315  455052 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0317 12:40:52.790344  455052 node_conditions.go:123] node cpu capacity is 8
	I0317 12:40:52.790361  455052 node_conditions.go:105] duration metric: took 3.191982ms to run NodePressure ...
	I0317 12:40:52.790375  455052 start.go:241] waiting for startup goroutines ...
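The node_conditions values above (ephemeral storage 304681132Ki, 8 CPUs) are read straight from the node's status. Fetching the same fields with client-go looks roughly like this (sketch, assuming a configured clientset; not minikube's node_conditions.go):

    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists each node's CPU and ephemeral-storage capacity,
    // the same fields reported by the node_conditions lines above.
    func printNodeCapacity(ctx context.Context, c *kubernetes.Clientset) error {
    	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    	}
    	return nil
    }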
	I0317 12:40:52.970590  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:53.163873  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:53.471714  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:53.663961  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:53.971638  455052 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0317 12:40:54.165249  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:54.471187  455052 kapi.go:107] duration metric: took 1m14.504065676s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0317 12:40:54.663724  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:55.181177  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:55.664966  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:56.164506  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:56.664667  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0317 12:40:57.164624  455052 kapi.go:107] duration metric: took 1m14.004040538s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
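The repeated kapi.go:96 lines above poll pods by label selector until they leave Pending; "Pending: [<nil>]" records the current phase with no error attached yet. A condensed sketch of such a selector wait (not minikube's kapi.go; assumes a configured clientset):

    package sketch

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitSelectorRunning polls until every pod matching selector (for example
    // "app.kubernetes.io/name=ingress-nginx") reaches phase Running.
    func waitSelectorRunning(ctx context.Context, c *kubernetes.Clientset, ns, selector string) error {
    	for {
    		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err == nil && len(pods.Items) > 0 {
    			ready := true
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					ready = false // still Pending, as in the repeated lines above
    				}
    			}
    			if ready {
    				return nil
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }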
	I0317 12:41:06.981158  455052 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0317 12:41:06.981192  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:07.479529  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:07.980481  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:08.479897  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:08.979569  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:09.480236  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:09.979644  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:10.480729  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:10.979778  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:11.480055  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:11.979330  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:12.479999  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:12.979564  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:13.479411  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:13.979645  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:14.479839  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:14.979566  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:15.480082  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:15.979724  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:16.479924  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:16.979445  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:17.480370  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:17.980036  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:18.479816  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:18.979426  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:19.480457  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:19.980135  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:20.479132  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:20.980655  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:21.480199  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:21.979202  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:22.479369  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:22.980251  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:23.480213  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:23.979330  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:24.479078  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:24.979331  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:25.481117  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:25.980111  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:26.479891  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:26.979293  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:27.479178  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:27.980039  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:28.479214  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:28.979143  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:29.479377  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:29.980164  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:30.479590  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:30.980763  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:31.479508  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:31.980501  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:32.479742  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:32.979400  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:33.480015  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:33.979488  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:34.479529  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:34.979633  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:35.480481  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:35.979528  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:36.480373  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:36.979487  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:37.480121  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:37.979618  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:38.478950  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:38.980157  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:39.479388  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:39.979684  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:40.480541  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:40.979353  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:41.479656  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:41.980519  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:42.480066  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:42.979303  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:43.479590  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:43.980298  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:44.479479  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:44.979931  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:45.479240  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:45.979460  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:46.480270  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:46.979651  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:47.480643  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:47.979905  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:48.479669  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0317 12:41:48.980409  455052 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same kapi.go:96 "waiting for pod" line repeated every ~500ms, 50 polls from 12:41:49.479066 to 12:42:13.979706, state still Pending: [<nil>] ...]
	I0317 12:42:14.480678  455052 kapi.go:107] duration metric: took 2m29.504600411s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0317 12:42:14.482276  455052 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-012219 cluster.
	I0317 12:42:14.483771  455052 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0317 12:42:14.485152  455052 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0317 12:42:14.486680  455052 out.go:177] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, volcano, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0317 12:42:14.487909  455052 addons.go:514] duration metric: took 2m43.843476509s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner ingress-dns default-storageclass volcano inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0317 12:42:14.487990  455052 start.go:246] waiting for cluster config update ...
	I0317 12:42:14.488027  455052 start.go:255] writing updated cluster config ...
	I0317 12:42:14.488433  455052 ssh_runner.go:195] Run: rm -f paused
	I0317 12:42:14.545367  455052 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 12:42:14.547125  455052 out.go:177] * Done! kubectl is now configured to use "addons-012219" cluster and "default" namespace by default
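The gcp-auth notes above are actionable: credential mounting is opt-out per pod via the `gcp-auth-skip-secret` label, and since the injection is done by an admission webhook the label has to be on the pod at creation time. A minimal sketch, using this run's context and a hypothetical pod name:

    kubectl --context addons-012219 run my-pod \
      --image=docker.io/nginx:alpine \
      --labels="gcp-auth-skip-secret=true"

Pods created before the addon was enabled only pick up credentials after being recreated, or after rerunning addons enable with --refresh, as the log itself states.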
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	72f0986b33bff       56cc512116c8f       3 minutes ago       Running             busybox                   0                   c2583758c78ae       busybox
	f883a246b949a       ee44bc2368033       5 minutes ago       Running             controller                0                   33fb94840828b       ingress-nginx-controller-56d7c84fd4-4z52c
	3b2f143a08370       a62eeff05ba51       5 minutes ago       Exited              patch                     0                   8be1f7fa66df6       ingress-nginx-admission-patch-t76b7
	2e445477a0d3a       e16d1e3a10667       5 minutes ago       Running             local-path-provisioner    0                   1cad4a4a9bc31       local-path-provisioner-76f89f99b5-n9bmg
	a3b120baf6d20       a62eeff05ba51       5 minutes ago       Exited              create                    0                   bb57596d33854       ingress-nginx-admission-create-l6q8w
	9af85dc8bdce7       c69fa2e9cbf5f       6 minutes ago       Running             coredns                   0                   fdb2ccd4b262c       coredns-668d6bf9bc-d2bx4
	c9fa71bfc47e4       30dd67412fdea       6 minutes ago       Running             minikube-ingress-dns      0                   11e8588abc22d       kube-ingress-dns-minikube
	9b64fcbeb6014       df3849d954c98       6 minutes ago       Running             kindnet-cni               0                   75ed382f3859f       kindnet-cz7dg
	7e31db05a70a8       6e38f40d628db       6 minutes ago       Running             storage-provisioner       0                   73e2163ca213c       storage-provisioner
	d3a2f527a6876       f1332858868e1       6 minutes ago       Running             kube-proxy                0                   0356e8b8272c6       kube-proxy-dd72m
	bb5f00b762560       85b7a174738ba       7 minutes ago       Running             kube-apiserver            0                   37827ade0909f       kube-apiserver-addons-012219
	0c8a01ff0ac04       a9e7e6b294baf       7 minutes ago       Running             etcd                      0                   5301a69037bec       etcd-addons-012219
	379f28506a876       b6a454c5a800d       7 minutes ago       Running             kube-controller-manager   0                   e29b57dbb448a       kube-controller-manager-addons-012219
	5e2a09620775b       d8e673e7c9983       7 minutes ago       Running             kube-scheduler            0                   07e0a925b2596       kube-scheduler-addons-012219
	
	
	==> containerd <==
	Mar 17 12:44:26 addons-012219 containerd[864]: time="2025-03-17T12:44:26.742186977Z" level=info msg="RemovePodSandbox for \"036ded01f6d7829ea2d3104cf86ad7f5527275ff01aa8f651dcc9e04571a6c9e\""
	Mar 17 12:44:26 addons-012219 containerd[864]: time="2025-03-17T12:44:26.742239365Z" level=info msg="Forcibly stopping sandbox \"036ded01f6d7829ea2d3104cf86ad7f5527275ff01aa8f651dcc9e04571a6c9e\""
	Mar 17 12:44:26 addons-012219 containerd[864]: time="2025-03-17T12:44:26.751590415Z" level=info msg="TearDown network for sandbox \"036ded01f6d7829ea2d3104cf86ad7f5527275ff01aa8f651dcc9e04571a6c9e\" successfully"
	Mar 17 12:44:26 addons-012219 containerd[864]: time="2025-03-17T12:44:26.756247973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"036ded01f6d7829ea2d3104cf86ad7f5527275ff01aa8f651dcc9e04571a6c9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Mar 17 12:44:26 addons-012219 containerd[864]: time="2025-03-17T12:44:26.756400988Z" level=info msg="RemovePodSandbox \"036ded01f6d7829ea2d3104cf86ad7f5527275ff01aa8f651dcc9e04571a6c9e\" returns successfully"
	Mar 17 12:44:29 addons-012219 containerd[864]: time="2025-03-17T12:44:29.988897466Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Mar 17 12:44:29 addons-012219 containerd[864]: time="2025-03-17T12:44:29.990851544Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:44:30 addons-012219 containerd[864]: time="2025-03-17T12:44:30.667618322Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:44:32 addons-012219 containerd[864]: time="2025-03-17T12:44:32.545833558Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 12:44:32 addons-012219 containerd[864]: time="2025-03-17T12:44:32.545919966Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10966"
	Mar 17 12:44:52 addons-012219 containerd[864]: time="2025-03-17T12:44:52.988925733Z" level=info msg="PullImage \"busybox:stable\""
	Mar 17 12:44:52 addons-012219 containerd[864]: time="2025-03-17T12:44:52.990686105Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:44:53 addons-012219 containerd[864]: time="2025-03-17T12:44:53.659933217Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:44:55 addons-012219 containerd[864]: time="2025-03-17T12:44:55.535974153Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 12:44:55 addons-012219 containerd[864]: time="2025-03-17T12:44:55.536066320Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=10979"
	Mar 17 12:45:14 addons-012219 containerd[864]: time="2025-03-17T12:45:14.989335913Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Mar 17 12:45:14 addons-012219 containerd[864]: time="2025-03-17T12:45:14.991443930Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:45:15 addons-012219 containerd[864]: time="2025-03-17T12:45:15.682943301Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:45:17 addons-012219 containerd[864]: time="2025-03-17T12:45:17.553229847Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 12:45:17 addons-012219 containerd[864]: time="2025-03-17T12:45:17.553293089Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Mar 17 12:46:15 addons-012219 containerd[864]: time="2025-03-17T12:46:15.989541497Z" level=info msg="PullImage \"busybox:stable\""
	Mar 17 12:46:15 addons-012219 containerd[864]: time="2025-03-17T12:46:15.991662561Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:46:16 addons-012219 containerd[864]: time="2025-03-17T12:46:16.679360036Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:46:18 addons-012219 containerd[864]: time="2025-03-17T12:46:18.946219239Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 12:46:18 addons-012219 containerd[864]: time="2025-03-17T12:46:18.946362922Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=21179"
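Both pull failures above share one root cause: Docker Hub's unauthenticated pull rate limit (HTTP 429), which is what keeps the nginx and busybox pods in ImagePullBackOff for the rest of the run. One way to take the registry out of the loop is to pull once on the host and side-load the images into the cluster; a sketch using this run's profile name:

    docker pull docker.io/nginx:alpine
    docker pull docker.io/busybox:stable
    out/minikube-linux-amd64 -p addons-012219 image load docker.io/nginx:alpine
    out/minikube-linux-amd64 -p addons-012219 image load docker.io/busybox:stable

The interleaved "failed to decode hosts.toml" errors are a separate issue: containerd found a malformed registry hosts file under its certs.d tree and carried on with defaults. For reference, a well-formed file looks like the following (the mirror URL is an assumption, not taken from this run):

    sudo tee /etc/containerd/certs.d/docker.io/hosts.toml <<'EOF'
    server = "https://registry-1.docker.io"

    [host."https://mirror.gcr.io"]
      capabilities = ["pull", "resolve"]
    EOF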
	
	
	==> coredns [9af85dc8bdce7c7605798e49add631c328e501002f7088a1c154d4d7c6829056] <==
	[INFO] 10.244.0.16:34216 - 37652 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000188479s
	[INFO] 10.244.0.16:43420 - 27799 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004589602s
	[INFO] 10.244.0.16:43420 - 27455 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004741711s
	[INFO] 10.244.0.16:47013 - 51640 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003867777s
	[INFO] 10.244.0.16:47013 - 51331 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004475572s
	[INFO] 10.244.0.16:39077 - 47593 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003471984s
	[INFO] 10.244.0.16:39077 - 47344 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004574207s
	[INFO] 10.244.0.16:39183 - 16956 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144916s
	[INFO] 10.244.0.16:39183 - 17153 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000241434s
	[INFO] 10.244.0.26:34264 - 6039 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000313032s
	[INFO] 10.244.0.26:55315 - 55406 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000421144s
	[INFO] 10.244.0.26:46365 - 29788 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167696s
	[INFO] 10.244.0.26:40754 - 38194 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000205761s
	[INFO] 10.244.0.26:49806 - 38763 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009024s
	[INFO] 10.244.0.26:59680 - 32634 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111996s
	[INFO] 10.244.0.26:35656 - 5418 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007327485s
	[INFO] 10.244.0.26:50153 - 9679 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007599611s
	[INFO] 10.244.0.26:55286 - 36934 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006995331s
	[INFO] 10.244.0.26:55921 - 56025 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007813438s
	[INFO] 10.244.0.26:59506 - 19688 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004925485s
	[INFO] 10.244.0.26:42060 - 55729 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00572541s
	[INFO] 10.244.0.26:42368 - 10263 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002102149s
	[INFO] 10.244.0.26:57842 - 8006 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.002237615s
	[INFO] 10.244.0.31:36061 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000299324s
	[INFO] 10.244.0.31:44219 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202677s
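The NXDOMAIN runs above are expected behavior: with the default ndots:5, every external name (registry..., storage.googleapis.com) is first tried against each cluster and GCE search domain before the bare name resolves. If the extra round trips matter, a pod can lower ndots through dnsConfig; a sketch with an illustrative pod name and value:

    kubectl --context addons-012219 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-example
    spec:
      containers:
      - name: app
        image: docker.io/nginx:alpine
      dnsConfig:
        options:
        - name: ndots
          value: "2"
    EOF

Appending a trailing dot (storage.googleapis.com.) has the same effect for a single query, since a fully qualified name skips the search list.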
	
	
	==> describe nodes <==
	Name:               addons-012219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-012219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=addons-012219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T12_39_26_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-012219
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 12:39:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-012219
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 12:46:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 12:44:01 +0000   Mon, 17 Mar 2025 12:39:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 12:44:01 +0000   Mon, 17 Mar 2025 12:39:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 12:44:01 +0000   Mon, 17 Mar 2025 12:39:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 12:44:01 +0000   Mon, 17 Mar 2025 12:39:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-012219
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 513f2da8e2ab4b528f20355459ada2cc
	  System UUID:                718990bd-83c1-42aa-9bb1-42fb8bb0fb09
	  Boot ID:                    40219139-515e-4d1c-86e4-bab1900bd49a
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-4z52c    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         6m47s
	  kube-system                 coredns-668d6bf9bc-d2bx4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m55s
	  kube-system                 etcd-addons-012219                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m1s
	  kube-system                 kindnet-cz7dg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m55s
	  kube-system                 kube-apiserver-addons-012219                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-controller-manager-addons-012219        200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m1s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	  kube-system                 kube-proxy-dd72m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-scheduler-addons-012219                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  local-path-storage          local-path-provisioner-76f89f99b5-n9bmg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 6m52s                kube-proxy       
	  Normal   NodeHasSufficientMemory  7m6s (x8 over 7m6s)  kubelet          Node addons-012219 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m6s (x8 over 7m6s)  kubelet          Node addons-012219 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m6s (x7 over 7m6s)  kubelet          Node addons-012219 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 7m1s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m1s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  7m                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m                   kubelet          Node addons-012219 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m                   kubelet          Node addons-012219 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m                   kubelet          Node addons-012219 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m57s                node-controller  Node addons-012219 event: Registered Node addons-012219 in Controller
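One item in the Events worth acting on is the CgroupV1 warning: kubelet flags cgroup v1 as being in maintenance mode on this 5.15 kernel. Checking which version a host actually runs is a one-liner:

    stat -fc %T /sys/fs/cgroup    # cgroup2fs = cgroup v2, tmpfs = cgroup v1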
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +2.171804] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000008] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000005] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000004] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.047810] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000009] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000001] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000011] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000008] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[Mar17 12:32] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000007] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.043860] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000003] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
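The repeated martian-source lines mean the kernel saw packets addressed to 10.96.0.1 (the cluster service VIP) arrive on the Docker bridges from a source it would route out a different interface; with bridged minikube networking this is noise rather than a fault. The logging itself is controlled by a standard sysctl, so it can be inspected or silenced on the host:

    sysctl net.ipv4.conf.all.log_martians
    sudo sysctl -w net.ipv4.conf.all.log_martians=0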
	
	
	==> etcd [0c8a01ff0ac04a67f0ca7daa16df0b12a5be2d60bc00f609c695b24631e44cd4] <==
	{"level":"warn","ts":"2025-03-17T12:40:39.171483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.982678ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:40:39.171532Z","caller":"traceutil/trace.go:171","msg":"trace[1706190551] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1241; }","duration":"193.080041ms","start":"2025-03-17T12:40:38.978439Z","end":"2025-03-17T12:40:39.171519Z","steps":["trace[1706190551] 'agreement among raft nodes before linearized reading'  (duration: 192.961286ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:40:39.171698Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.965713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-03-17T12:40:39.171794Z","caller":"traceutil/trace.go:171","msg":"trace[793474553] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1241; }","duration":"102.087943ms","start":"2025-03-17T12:40:39.069692Z","end":"2025-03-17T12:40:39.171780Z","steps":["trace[793474553] 'agreement among raft nodes before linearized reading'  (duration: 101.727159ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:40:41.445281Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.039026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/addons-012219\" limit:1 ","response":"range_response_count:1 size:555"}
	{"level":"info","ts":"2025-03-17T12:40:41.445349Z","caller":"traceutil/trace.go:171","msg":"trace[1550817072] range","detail":"{range_begin:/registry/leases/kube-node-lease/addons-012219; range_end:; response_count:1; response_revision:1251; }","duration":"129.145148ms","start":"2025-03-17T12:40:41.316187Z","end":"2025-03-17T12:40:41.445332Z","steps":["trace[1550817072] 'range keys from in-memory index tree'  (duration: 128.87378ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:41:21.112639Z","caller":"traceutil/trace.go:171","msg":"trace[1120577109] transaction","detail":"{read_only:false; response_revision:1413; number_of_response:1; }","duration":"105.170138ms","start":"2025-03-17T12:41:21.007444Z","end":"2025-03-17T12:41:21.112614Z","steps":["trace[1120577109] 'process raft request'  (duration: 105.008345ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:42:45.196439Z","caller":"traceutil/trace.go:171","msg":"trace[1142690297] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1660; }","duration":"109.500712ms","start":"2025-03-17T12:42:45.086909Z","end":"2025-03-17T12:42:45.196409Z","steps":["trace[1142690297] 'process raft request'  (duration: 109.140613ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:42:45.401890Z","caller":"traceutil/trace.go:171","msg":"trace[1069886062] linearizableReadLoop","detail":"{readStateIndex:1727; appliedIndex:1726; }","duration":"135.149903ms","start":"2025-03-17T12:42:45.266716Z","end":"2025-03-17T12:42:45.401866Z","steps":["trace[1069886062] 'read index received'  (duration: 60.1132ms)","trace[1069886062] 'applied index is now lower than readState.Index'  (duration: 75.035995ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:42:45.401909Z","caller":"traceutil/trace.go:171","msg":"trace[1395844034] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1663; }","duration":"135.750169ms","start":"2025-03-17T12:42:45.266137Z","end":"2025-03-17T12:42:45.401887Z","steps":["trace[1395844034] 'process raft request'  (duration: 60.729876ms)","trace[1395844034] 'compare'  (duration: 74.868841ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T12:42:45.402090Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.137849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2025-03-17T12:42:45.402137Z","caller":"traceutil/trace.go:171","msg":"trace[1936804532] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:1663; }","duration":"135.208194ms","start":"2025-03-17T12:42:45.266916Z","end":"2025-03-17T12:42:45.402125Z","steps":["trace[1936804532] 'agreement among raft nodes before linearized reading'  (duration: 135.123423ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:42:45.402088Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.220997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/volcano-system/volcano-scheduler-service\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:42:45.402284Z","caller":"traceutil/trace.go:171","msg":"trace[218181080] range","detail":"{range_begin:/registry/services/endpoints/volcano-system/volcano-scheduler-service; range_end:; response_count:0; response_revision:1663; }","duration":"135.453301ms","start":"2025-03-17T12:42:45.266818Z","end":"2025-03-17T12:42:45.402272Z","steps":["trace[218181080] 'agreement among raft nodes before linearized reading'  (duration: 135.204345ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:42:45.402089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.36442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/volcano-system/volcano-scheduler-service-6m88g\" limit:1 ","response":"range_response_count:1 size:1209"}
	{"level":"info","ts":"2025-03-17T12:42:45.402392Z","caller":"traceutil/trace.go:171","msg":"trace[1412987289] range","detail":"{range_begin:/registry/endpointslices/volcano-system/volcano-scheduler-service-6m88g; range_end:; response_count:1; response_revision:1663; }","duration":"135.692569ms","start":"2025-03-17T12:42:45.266691Z","end":"2025-03-17T12:42:45.402383Z","steps":["trace[1412987289] 'agreement among raft nodes before linearized reading'  (duration: 135.303131ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:42:45.745091Z","caller":"traceutil/trace.go:171","msg":"trace[1704665035] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1684; }","duration":"175.124333ms","start":"2025-03-17T12:42:45.569942Z","end":"2025-03-17T12:42:45.745066Z","steps":["trace[1704665035] 'process raft request'  (duration: 87.948789ms)","trace[1704665035] 'compare'  (duration: 86.814718ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:43:18.625186Z","caller":"traceutil/trace.go:171","msg":"trace[1253148108] linearizableReadLoop","detail":"{readStateIndex:1983; appliedIndex:1982; }","duration":"118.524333ms","start":"2025-03-17T12:43:18.506639Z","end":"2025-03-17T12:43:18.625163Z","steps":["trace[1253148108] 'read index received'  (duration: 60.110846ms)","trace[1253148108] 'applied index is now lower than readState.Index'  (duration: 58.412627ms)"],"step_count":2}
	{"level":"warn","ts":"2025-03-17T12:43:18.625392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.702389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-03-17T12:43:18.625403Z","caller":"traceutil/trace.go:171","msg":"trace[838863475] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1909; }","duration":"120.334731ms","start":"2025-03-17T12:43:18.505045Z","end":"2025-03-17T12:43:18.625380Z","steps":["trace[838863475] 'process raft request'  (duration: 61.76538ms)","trace[838863475] 'compare'  (duration: 58.213746ms)"],"step_count":2}
	{"level":"info","ts":"2025-03-17T12:43:18.625450Z","caller":"traceutil/trace.go:171","msg":"trace[1359405829] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:1909; }","duration":"118.802749ms","start":"2025-03-17T12:43:18.506634Z","end":"2025-03-17T12:43:18.625437Z","steps":["trace[1359405829] 'agreement among raft nodes before linearized reading'  (duration: 118.667931ms)"],"step_count":1}
	{"level":"warn","ts":"2025-03-17T12:43:18.625550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.875558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-6f7db97f95\" limit:1 ","response":"range_response_count:1 size:2926"}
	{"level":"warn","ts":"2025-03-17T12:43:18.625598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.878461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/nvidia-device-plugin-daemonset-s96nr\" limit:1 ","response":"range_response_count:1 size:4285"}
	{"level":"info","ts":"2025-03-17T12:43:18.625604Z","caller":"traceutil/trace.go:171","msg":"trace[60376047] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-6f7db97f95; range_end:; response_count:1; response_revision:1909; }","duration":"118.951477ms","start":"2025-03-17T12:43:18.506642Z","end":"2025-03-17T12:43:18.625593Z","steps":["trace[60376047] 'agreement among raft nodes before linearized reading'  (duration: 118.842099ms)"],"step_count":1}
	{"level":"info","ts":"2025-03-17T12:43:18.625627Z","caller":"traceutil/trace.go:171","msg":"trace[766425522] range","detail":"{range_begin:/registry/pods/kube-system/nvidia-device-plugin-daemonset-s96nr; range_end:; response_count:1; response_revision:1909; }","duration":"118.933146ms","start":"2025-03-17T12:43:18.506685Z","end":"2025-03-17T12:43:18.625618Z","steps":["trace[766425522] 'agreement among raft nodes before linearized reading'  (duration: 118.825396ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:46:26 up  2:28,  0 users,  load average: 0.32, 1.19, 2.00
	Linux addons-012219 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [9b64fcbeb6014a84017e01b17c16c6441e599c7b3f10d9b6d2654868922d1649] <==
	I0317 12:44:22.649227       1 main.go:301] handling current node
	I0317 12:44:32.646446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:44:32.646489       1 main.go:301] handling current node
	I0317 12:44:42.645524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:44:42.645590       1 main.go:301] handling current node
	I0317 12:44:52.646349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:44:52.646410       1 main.go:301] handling current node
	I0317 12:45:02.644981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:45:02.645046       1 main.go:301] handling current node
	I0317 12:45:12.650303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:45:12.650346       1 main.go:301] handling current node
	I0317 12:45:22.646411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:45:22.646467       1 main.go:301] handling current node
	I0317 12:45:32.648443       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:45:32.648485       1 main.go:301] handling current node
	I0317 12:45:42.644999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:45:42.645068       1 main.go:301] handling current node
	I0317 12:45:52.646298       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:45:52.646371       1 main.go:301] handling current node
	I0317 12:46:02.646227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:46:02.646288       1 main.go:301] handling current node
	I0317 12:46:12.652471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:46:12.652646       1 main.go:301] handling current node
	I0317 12:46:22.646445       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 12:46:22.646499       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bb5f00b7625601d21e905147e28f325d6d2eb52f6d757f39f8ce61ff4f55c983] <==
	W0317 12:42:47.051524       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0317 12:42:47.365563       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0317 12:42:47.757473       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0317 12:43:03.635813       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58138: use of closed network connection
	E0317 12:43:03.818342       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:58162: use of closed network connection
	I0317 12:43:13.586843       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.198.122"}
	I0317 12:43:34.949156       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0317 12:43:41.483627       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0317 12:43:41.678398       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.52.27"}
	I0317 12:43:41.692581       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0317 12:43:42.711288       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0317 12:43:50.250114       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0317 12:44:15.694314       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.694381       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:15.708477       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.708564       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:15.709759       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.709811       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:15.745127       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.745201       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0317 12:44:15.767186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0317 12:44:15.767239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0317 12:44:16.709982       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0317 12:44:16.767220       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0317 12:44:16.945072       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
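The "Terminating all watchers from cacher" lines are the apiserver reacting to CRDs being deleted as addons (volcano, inspektor-gadget, the snapshot group) are torn down mid-test, not a crash. Whether any of those API groups are still registered can be checked directly, using this run's context name:

    kubectl --context addons-012219 get crds | grep -E 'volcano|snapshot|gadget' || echo "none left"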
	
	
	==> kube-controller-manager [379f28506a87666447bb9e5a46766c5b8106bf905cd90914f25a4fdc6d2bdac8] <==
	E0317 12:46:00.142600       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:07.763628       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:07.764857       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0317 12:46:07.765835       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:07.765889       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:13.731604       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:13.732746       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0317 12:46:13.733609       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:13.733643       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:14.118368       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:14.119318       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="bus.volcano.sh/v1alpha1, Resource=commands"
	W0317 12:46:14.120276       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:14.120352       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:18.813904       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:18.815061       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0317 12:46:18.815957       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:18.815994       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:19.131834       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:19.133815       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobflows"
	W0317 12:46:19.135031       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:19.135083       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0317 12:46:25.209016       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0317 12:46:25.210261       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=podgroups"
	W0317 12:46:25.211295       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0317 12:46:25.211337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d3a2f527a68764f7f040d4b49dce23f813fbc96471794730d235e72cb3e66af3] <==
	I0317 12:39:33.459942       1 server_linux.go:66] "Using iptables proxy"
	I0317 12:39:34.065102       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0317 12:39:34.065210       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 12:39:34.462554       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 12:39:34.462648       1 server_linux.go:170] "Using iptables Proxier"
	I0317 12:39:34.548519       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 12:39:34.548961       1 server.go:497] "Version info" version="v1.32.2"
	I0317 12:39:34.548980       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 12:39:34.563177       1 config.go:199] "Starting service config controller"
	I0317 12:39:34.563231       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 12:39:34.563268       1 config.go:105] "Starting endpoint slice config controller"
	I0317 12:39:34.563274       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 12:39:34.564040       1 config.go:329] "Starting node config controller"
	I0317 12:39:34.564055       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 12:39:34.664095       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 12:39:34.664167       1 shared_informer.go:320] Caches are synced for service config
	I0317 12:39:37.464457       1 shared_informer.go:320] Caches are synced for node config
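The only non-routine line here is the startup warning: nodePortAddresses is unset, so NodePort services accept connections on every local IP. kube-proxy's own hint names the fix; in a kubeadm-managed cluster like this one the setting lives in the kube-proxy ConfigMap (setting it to "primary" follows the hint in the log; the edit itself is a sketch):

    kubectl --context addons-012219 -n kube-system get configmap kube-proxy -o yaml \
      | grep -n nodePortAddresses    # then set: nodePortAddresses: ["primary"]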
	
	
	==> kube-scheduler [5e2a09620775b2c75311fe4e7c162d2c0f322d3fd4424587a9c48ee80c51dc42] <==
	W0317 12:39:23.464420       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 12:39:23.466171       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:23.466555       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 12:39:23.466702       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:23.466938       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 12:39:23.466981       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.268854       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 12:39:24.268916       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.269972       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 12:39:24.270018       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.278129       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 12:39:24.278189       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.344802       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 12:39:24.344855       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0317 12:39:24.390622       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0317 12:39:24.390672       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.425578       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 12:39:24.425627       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.573072       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 12:39:24.573157       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.590139       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 12:39:24.590216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:39:24.681289       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 12:39:24.681350       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0317 12:39:26.559813       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 12:44:55 addons-012219 kubelet[1625]: E0317 12:44:55.536402    1625 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Mar 17 12:44:55 addons-012219 kubelet[1625]: E0317 12:44:55.536472    1625 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Mar 17 12:44:55 addons-012219 kubelet[1625]: E0317 12:44:55.536594    1625 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nft6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(288fd0b6-8224-4d26-9aa9-20812cfdeca9): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Mar 17 12:44:55 addons-012219 kubelet[1625]: E0317 12:44:55.537847    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:44:59 addons-012219 kubelet[1625]: E0317 12:44:59.989301    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:45:06 addons-012219 kubelet[1625]: E0317 12:45:06.988804    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:45:15 addons-012219 kubelet[1625]: I0317 12:45:15.987860    1625 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Mar 17 12:45:17 addons-012219 kubelet[1625]: E0317 12:45:17.553573    1625 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Mar 17 12:45:17 addons-012219 kubelet[1625]: E0317 12:45:17.553650    1625 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Mar 17 12:45:17 addons-012219 kubelet[1625]: E0317 12:45:17.553808    1625 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hh4v9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(a2751ab5-cd1c-44a3-a6ba-dba98b254a96): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Mar 17 12:45:17 addons-012219 kubelet[1625]: E0317 12:45:17.555059    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:45:20 addons-012219 kubelet[1625]: E0317 12:45:20.989532    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:45:27 addons-012219 kubelet[1625]: E0317 12:45:27.988746    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:45:32 addons-012219 kubelet[1625]: E0317 12:45:32.989128    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:45:38 addons-012219 kubelet[1625]: E0317 12:45:38.988707    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:45:46 addons-012219 kubelet[1625]: E0317 12:45:46.989305    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:45:51 addons-012219 kubelet[1625]: E0317 12:45:51.988505    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:46:00 addons-012219 kubelet[1625]: E0317 12:46:00.989280    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:46:03 addons-012219 kubelet[1625]: E0317 12:46:03.989214    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:46:15 addons-012219 kubelet[1625]: E0317 12:46:15.989248    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="a2751ab5-cd1c-44a3-a6ba-dba98b254a96"
	Mar 17 12:46:18 addons-012219 kubelet[1625]: E0317 12:46:18.946547    1625 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Mar 17 12:46:18 addons-012219 kubelet[1625]: E0317 12:46:18.946615    1625 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Mar 17 12:46:18 addons-012219 kubelet[1625]: E0317 12:46:18.946748    1625 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nft6x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(288fd0b6-8224-4d26-9aa9-20812cfdeca9): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Mar 17 12:46:18 addons-012219 kubelet[1625]: E0317 12:46:18.947984    1625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="288fd0b6-8224-4d26-9aa9-20812cfdeca9"
	Mar 17 12:46:18 addons-012219 kubelet[1625]: I0317 12:46:18.988290    1625 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [7e31db05a70a8aa2cb0f75b72d342f7e671f1ea0e1be6634ce27012647e92af9] <==
	I0317 12:39:38.160831       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0317 12:39:38.264352       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0317 12:39:38.267238       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0317 12:39:38.360209       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0317 12:39:38.360490       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-012219_f580fdba-4164-417d-99e0-bbfdff8b9108!
	I0317 12:39:38.361811       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2e6be1f1-63bf-4a91-a4d7-e3d46b4cb84d", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-012219_f580fdba-4164-417d-99e0-bbfdff8b9108 became leader
	I0317 12:39:38.465181       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-012219_f580fdba-4164-417d-99e0-bbfdff8b9108!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-012219 -n addons-012219
helpers_test.go:261: (dbg) Run:  kubectl --context addons-012219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path ingress-nginx-admission-create-l6q8w ingress-nginx-admission-patch-t76b7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-012219 describe pod nginx test-local-path ingress-nginx-admission-create-l6q8w ingress-nginx-admission-patch-t76b7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-012219 describe pod nginx test-local-path ingress-nginx-admission-create-l6q8w ingress-nginx-admission-patch-t76b7: exit status 1 (84.680021ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-012219/192.168.49.2
	Start Time:       Mon, 17 Mar 2025 12:43:41 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.34
	IPs:
	  IP:  10.244.0.34
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hh4v9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hh4v9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m46s                default-scheduler  Successfully assigned default/nginx to addons-012219
	  Normal   Pulling    73s (x4 over 2m45s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     70s (x4 over 2m42s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     70s (x4 over 2m42s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x9 over 2m41s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12s (x9 over 2m41s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-012219/192.168.49.2
	Start Time:       Mon, 17 Mar 2025 12:43:25 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nft6x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-nft6x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/test-local-path to addons-012219
	  Warning  Failed     92s (x4 over 2m58s)  kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:7a5342b7662db8de99e045a2b47b889c5701b8dde0ce5ae3f1577bf57a15ed40: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    27s (x9 over 2m58s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     27s (x9 over 2m58s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    12s (x5 over 3m2s)   kubelet            Pulling image "busybox:stable"
	  Warning  Failed     9s (x5 over 2m58s)   kubelet            Error: ErrImagePull
	  Warning  Failed     9s                   kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:eddbff0886ee9d637bbcf8e30612db26cfebf9fbd5d53948c7c9090b24913b4b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-l6q8w" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t76b7" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-012219 describe pod nginx test-local-path ingress-nginx-admission-create-l6q8w ingress-nginx-admission-patch-t76b7: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.18825343s)
--- FAIL: TestAddons/parallel/LocalPath (232.29s)
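Every ErrImagePull/ImagePullBackOff in the logs above traces to the same root cause: registry-1.docker.io answered unauthenticated manifest requests for nginx:alpine and busybox:stable with 429 Too Many Requests, so neither pod could start before the test timeouts expired. Since both containers use ImagePullPolicy: IfNotPresent, one way to rerun these tests without touching Docker Hub is to side-load the images into the cluster node first. A minimal sketch, assuming the profile name addons-012219 from this run and a host that can still pull (or already caches) the two images:

    # Pull once on the host, authenticated or from a local cache.
    docker pull docker.io/nginx:alpine
    docker pull docker.io/busybox:stable

    # Copy the images into the node's containerd image store; with
    # IfNotPresent the kubelet then never contacts the registry.
    minikube -p addons-012219 image load docker.io/nginx:alpine
    minikube -p addons-012219 image load docker.io/busybox:stable

Authenticating the pulls (docker login) raises the Docker Hub rate limit and is an alternative when side-loading is not an option.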

TestDockerEnvContainerd (40.9s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-806077 --driver=docker  --container-runtime=containerd
E0317 12:52:14.568554  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:14.575085  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:14.586537  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:14.608143  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:14.649696  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:14.731286  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:14.892887  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:15.214719  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:15.856901  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:17.138647  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:19.700536  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:52:24.821960  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-806077 --driver=docker  --container-runtime=containerd: (22.100879485s)
E0317 12:52:35.064254  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-806077"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-nnvgkmHaARCO/agent.482416" SSH_AGENT_PID="482417" DOCKER_HOST=ssh://docker@127.0.0.1:33150 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-nnvgkmHaARCO/agent.482416" SSH_AGENT_PID="482417" DOCKER_HOST=ssh://docker@127.0.0.1:33150 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-nnvgkmHaARCO/agent.482416" SSH_AGENT_PID="482417" DOCKER_HOST=ssh://docker@127.0.0.1:33150 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (2.524728162s)

-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-nnvgkmHaARCO/agent.482416" SSH_AGENT_PID="482417" DOCKER_HOST=ssh://docker@127.0.0.1:33150 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
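Only the build step fails here: the SSH-tunnelled daemon accepted the connection (the docker version step above completed), but the legacy builder died with the opaque "Error response from daemon: exit status 1". A sketch for reproducing this by hand, reusing the session-specific values from this run (the agent socket, agent PID, and host port 33150 are only valid while that ssh-agent and the dockerenv-806077 container are alive):

    # Point the docker CLI at the daemon inside the minikube node,
    # exactly as the test does.
    export SSH_AUTH_SOCK=/tmp/ssh-nnvgkmHaARCO/agent.482416
    export SSH_AGENT_PID=482417
    export DOCKER_HOST=ssh://docker@127.0.0.1:33150

    # The test pins DOCKER_BUILDKIT=0; retrying with BuildKit enabled
    # (the daemon's default) usually surfaces a more descriptive error
    # than the legacy builder's bare exit status.
    DOCKER_BUILDKIT=1 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env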
panic.go:631: *** TestDockerEnvContainerd FAILED at 2025-03-17 12:52:48.129432859 +0000 UTC m=+872.381282477
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect dockerenv-806077
helpers_test.go:235: (dbg) docker inspect dockerenv-806077:

-- stdout --
	[
	    {
	        "Id": "582159a8670b2fb25982a05825aac4e8e564e6ec3ca060ed4690272e9d867611",
	        "Created": "2025-03-17T12:52:17.009983374Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479650,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T12:52:17.043758478Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/582159a8670b2fb25982a05825aac4e8e564e6ec3ca060ed4690272e9d867611/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/582159a8670b2fb25982a05825aac4e8e564e6ec3ca060ed4690272e9d867611/hostname",
	        "HostsPath": "/var/lib/docker/containers/582159a8670b2fb25982a05825aac4e8e564e6ec3ca060ed4690272e9d867611/hosts",
	        "LogPath": "/var/lib/docker/containers/582159a8670b2fb25982a05825aac4e8e564e6ec3ca060ed4690272e9d867611/582159a8670b2fb25982a05825aac4e8e564e6ec3ca060ed4690272e9d867611-json.log",
	        "Name": "/dockerenv-806077",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "dockerenv-806077:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-806077",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "582159a8670b2fb25982a05825aac4e8e564e6ec3ca060ed4690272e9d867611",
	                "LowerDir": "/var/lib/docker/overlay2/0cb956589a517ad0adde5a9e52d4ae03da6ac7c239e17067a90c9e5176cbe537-init/diff:/var/lib/docker/overlay2/0d1b72eeaeef000e911d7896b151fb0d0a984c18eeb180d19223ea8ba67fdac6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0cb956589a517ad0adde5a9e52d4ae03da6ac7c239e17067a90c9e5176cbe537/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0cb956589a517ad0adde5a9e52d4ae03da6ac7c239e17067a90c9e5176cbe537/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0cb956589a517ad0adde5a9e52d4ae03da6ac7c239e17067a90c9e5176cbe537/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "dockerenv-806077",
	                "Source": "/var/lib/docker/volumes/dockerenv-806077/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-806077",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-806077",
	                "name.minikube.sigs.k8s.io": "dockerenv-806077",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0d8630a15ada2fcd56bfcb47b5cd2cefd3510d01d63805f538dafaaf631aaecf",
	            "SandboxKey": "/var/run/docker/netns/0d8630a15ada",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-806077": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:06:61:b6:bb:02",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee445efbdd5ad3723a6f3318d2e24ebab142c8d6158f8e63970811ade7b3f5c5",
	                    "EndpointID": "e7b0cd19f561da0526decc693a754c951138487d4fa2174de5c173b59ff63418",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-806077",
	                        "582159a8670b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-806077 -n dockerenv-806077
helpers_test.go:244: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-806077 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p dockerenv-806077 logs -n 25: (1.05767319s)
helpers_test.go:252: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	|  Command   |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| start      | -p addons-012219 --wait=true         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:42 UTC |
	|            | --memory=4000 --alsologtostderr      |                  |         |         |                     |                     |
	|            | --addons=registry                    |                  |         |         |                     |                     |
	|            | --addons=metrics-server              |                  |         |         |                     |                     |
	|            | --addons=volumesnapshots             |                  |         |         |                     |                     |
	|            | --addons=csi-hostpath-driver         |                  |         |         |                     |                     |
	|            | --addons=gcp-auth                    |                  |         |         |                     |                     |
	|            | --addons=cloud-spanner               |                  |         |         |                     |                     |
	|            | --addons=inspektor-gadget            |                  |         |         |                     |                     |
	|            | --addons=nvidia-device-plugin        |                  |         |         |                     |                     |
	|            | --addons=yakd --addons=volcano       |                  |         |         |                     |                     |
	|            | --addons=amd-gpu-device-plugin       |                  |         |         |                     |                     |
	|            | --driver=docker                      |                  |         |         |                     |                     |
	|            | --container-runtime=containerd       |                  |         |         |                     |                     |
	|            | --addons=ingress                     |                  |         |         |                     |                     |
	|            | --addons=ingress-dns                 |                  |         |         |                     |                     |
	|            | --addons=storage-provisioner-rancher |                  |         |         |                     |                     |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:42 UTC | 17 Mar 25 12:42 UTC |
	|            | volcano --alsologtostderr -v=1       |                  |         |         |                     |                     |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | gcp-auth --alsologtostderr           |                  |         |         |                     |                     |
	|            | -v=1                                 |                  |         |         |                     |                     |
	| addons     | enable headlamp                      | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | -p addons-012219                     |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | amd-gpu-device-plugin                |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| addons     | addons-012219 addons                 | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | disable nvidia-device-plugin         |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | headlamp --alsologtostderr           |                  |         |         |                     |                     |
	|            | -v=1                                 |                  |         |         |                     |                     |
	| addons     | addons-012219 addons                 | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | disable metrics-server               |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| ip         | addons-012219 ip                     | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | registry --alsologtostderr           |                  |         |         |                     |                     |
	|            | -v=1                                 |                  |         |         |                     |                     |
	| addons     | addons-012219 addons                 | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | disable cloud-spanner                |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | yakd --alsologtostderr -v=1          |                  |         |         |                     |                     |
	| addons     | addons-012219 addons                 | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
	|            | disable inspektor-gadget             |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| addons     | addons-012219 addons                 | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|            | disable volumesnapshots              |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| addons     | addons-012219 addons                 | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
	|            | disable csi-hostpath-driver          |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:46 UTC | 17 Mar 25 12:47 UTC |
	|            | storage-provisioner-rancher          |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1               |                  |         |         |                     |                     |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:51 UTC | 17 Mar 25 12:51 UTC |
	|            | ingress-dns --alsologtostderr        |                  |         |         |                     |                     |
	|            | -v=1                                 |                  |         |         |                     |                     |
	| addons     | addons-012219 addons disable         | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:51 UTC | 17 Mar 25 12:51 UTC |
	|            | ingress --alsologtostderr -v=1       |                  |         |         |                     |                     |
	| stop       | -p addons-012219                     | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:51 UTC | 17 Mar 25 12:52 UTC |
	| addons     | enable dashboard -p                  | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | addons-012219                        |                  |         |         |                     |                     |
	| addons     | disable dashboard -p                 | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | addons-012219                        |                  |         |         |                     |                     |
	| addons     | disable gvisor -p                    | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | addons-012219                        |                  |         |         |                     |                     |
	| delete     | -p addons-012219                     | addons-012219    | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	| start      | -p dockerenv-806077                  | dockerenv-806077 | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | --driver=docker                      |                  |         |         |                     |                     |
	|            | --container-runtime=containerd       |                  |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p              | dockerenv-806077 | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | dockerenv-806077                     |                  |         |         |                     |                     |
	|------------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:52:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:52:11.476211  479129 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:52:11.476533  479129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:52:11.476538  479129 out.go:358] Setting ErrFile to fd 2...
	I0317 12:52:11.476541  479129 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:52:11.476749  479129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 12:52:11.477366  479129 out.go:352] Setting JSON to false
	I0317 12:52:11.478400  479129 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9272,"bootTime":1742206660,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:52:11.478523  479129 start.go:139] virtualization: kvm guest
	I0317 12:52:11.480925  479129 out.go:177] * [dockerenv-806077] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:52:11.482447  479129 notify.go:220] Checking for updates...
	I0317 12:52:11.482491  479129 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 12:52:11.484119  479129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:52:11.485735  479129 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:52:11.487216  479129 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 12:52:11.488606  479129 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 12:52:11.489913  479129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:52:11.491529  479129 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:52:11.515798  479129 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 12:52:11.515903  479129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:52:11.570278  479129 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-03-17 12:52:11.559066247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:52:11.570384  479129 docker.go:318] overlay module found
	I0317 12:52:11.572212  479129 out.go:177] * Using the docker driver based on user configuration
	I0317 12:52:11.573723  479129 start.go:297] selected driver: docker
	I0317 12:52:11.573751  479129 start.go:901] validating driver "docker" against <nil>
	I0317 12:52:11.573764  479129 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:52:11.573978  479129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:52:11.628821  479129 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-03-17 12:52:11.618525006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:52:11.628993  479129 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:52:11.629504  479129 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0317 12:52:11.629640  479129 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 12:52:11.631475  479129 out.go:177] * Using Docker driver with root privileges
	I0317 12:52:11.633030  479129 cni.go:84] Creating CNI manager for ""
	I0317 12:52:11.633134  479129 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:52:11.633143  479129 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 12:52:11.633231  479129 start.go:340] cluster config:
	{Name:dockerenv-806077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:dockerenv-806077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:52:11.634700  479129 out.go:177] * Starting "dockerenv-806077" primary control-plane node in "dockerenv-806077" cluster
	I0317 12:52:11.635980  479129 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 12:52:11.637325  479129 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 12:52:11.638434  479129 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:52:11.638481  479129 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 12:52:11.638490  479129 cache.go:56] Caching tarball of preloaded images
	I0317 12:52:11.638545  479129 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 12:52:11.638608  479129 preload.go:172] Found /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 12:52:11.638617  479129 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 12:52:11.639095  479129 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/config.json ...
	I0317 12:52:11.639119  479129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/config.json: {Name:mkc0153a74be7910cd260d8cdadba7d1f17c2343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:11.660514  479129 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 12:52:11.660529  479129 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 12:52:11.660550  479129 cache.go:230] Successfully downloaded all kic artifacts
	I0317 12:52:11.660603  479129 start.go:360] acquireMachinesLock for dockerenv-806077: {Name:mkc1073c3c45d3432882dabab9a9e8791ec42de0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:52:11.660724  479129 start.go:364] duration metric: took 101.962µs to acquireMachinesLock for "dockerenv-806077"
	I0317 12:52:11.660750  479129 start.go:93] Provisioning new machine with config: &{Name:dockerenv-806077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:dockerenv-806077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 12:52:11.660829  479129 start.go:125] createHost starting for "" (driver="docker")
	I0317 12:52:11.662939  479129 out.go:235] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I0317 12:52:11.663285  479129 start.go:159] libmachine.API.Create for "dockerenv-806077" (driver="docker")
	I0317 12:52:11.663326  479129 client.go:168] LocalClient.Create starting
	I0317 12:52:11.663408  479129 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem
	I0317 12:52:11.663475  479129 main.go:141] libmachine: Decoding PEM data...
	I0317 12:52:11.663489  479129 main.go:141] libmachine: Parsing certificate...
	I0317 12:52:11.663565  479129 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem
	I0317 12:52:11.663589  479129 main.go:141] libmachine: Decoding PEM data...
	I0317 12:52:11.663602  479129 main.go:141] libmachine: Parsing certificate...
	I0317 12:52:11.664088  479129 cli_runner.go:164] Run: docker network inspect dockerenv-806077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 12:52:11.682407  479129 cli_runner.go:211] docker network inspect dockerenv-806077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 12:52:11.682510  479129 network_create.go:284] running [docker network inspect dockerenv-806077] to gather additional debugging logs...
	I0317 12:52:11.682529  479129 cli_runner.go:164] Run: docker network inspect dockerenv-806077
	W0317 12:52:11.700366  479129 cli_runner.go:211] docker network inspect dockerenv-806077 returned with exit code 1
	I0317 12:52:11.700387  479129 network_create.go:287] error running [docker network inspect dockerenv-806077]: docker network inspect dockerenv-806077: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-806077 not found
	I0317 12:52:11.700415  479129 network_create.go:289] output of [docker network inspect dockerenv-806077]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-806077 not found
	
	** /stderr **
	I0317 12:52:11.700549  479129 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 12:52:11.718325  479129 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b7f7c0}
	I0317 12:52:11.718378  479129 network_create.go:124] attempt to create docker network dockerenv-806077 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0317 12:52:11.718438  479129 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-806077 dockerenv-806077
	I0317 12:52:11.773067  479129 network_create.go:108] docker network dockerenv-806077 192.168.49.0/24 created
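
A quick hand-run sanity check of the bridge network created above (an illustrative sketch, assuming the network still exists; it is not part of the recorded test run):

	# Confirm subnet and gateway match what network_create.go logged.
	docker network inspect dockerenv-806077 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# expected output: subnet=192.168.49.0/24 gateway=192.168.49.1
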
	I0317 12:52:11.773091  479129 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-806077" container
	I0317 12:52:11.773161  479129 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 12:52:11.791276  479129 cli_runner.go:164] Run: docker volume create dockerenv-806077 --label name.minikube.sigs.k8s.io=dockerenv-806077 --label created_by.minikube.sigs.k8s.io=true
	I0317 12:52:11.811402  479129 oci.go:103] Successfully created a docker volume dockerenv-806077
	I0317 12:52:11.811492  479129 cli_runner.go:164] Run: docker run --rm --name dockerenv-806077-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-806077 --entrypoint /usr/bin/test -v dockerenv-806077:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 12:52:12.298794  479129 oci.go:107] Successfully prepared a docker volume dockerenv-806077
	I0317 12:52:12.298852  479129 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:52:12.298875  479129 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 12:52:12.298961  479129 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-806077:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 12:52:16.938546  479129 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-806077:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.639533301s)
	I0317 12:52:16.938596  479129 kic.go:203] duration metric: took 4.639715354s to extract preloaded images to volume ...
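
The extraction step above unpacks the cached image tarball into the dockerenv-806077 volume, which later backs /var inside the node container. An illustrative check, assuming the volume still exists, an alpine image is available, and the preload's lib/containerd layout (none of this is from the log):

	# List the containerd image store that the tar extraction populated.
	docker run --rm -v dockerenv-806077:/var alpine ls /var/lib/containerd
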
	W0317 12:52:16.939238  479129 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 12:52:16.939389  479129 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 12:52:16.991835  479129 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-806077 --name dockerenv-806077 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-806077 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-806077 --network dockerenv-806077 --ip 192.168.49.2 --volume dockerenv-806077:/var --security-opt apparmor=unconfined --memory=8000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 12:52:17.285754  479129 cli_runner.go:164] Run: docker container inspect dockerenv-806077 --format={{.State.Running}}
	I0317 12:52:17.307428  479129 cli_runner.go:164] Run: docker container inspect dockerenv-806077 --format={{.State.Status}}
	I0317 12:52:17.330198  479129 cli_runner.go:164] Run: docker exec dockerenv-806077 stat /var/lib/dpkg/alternatives/iptables
	I0317 12:52:17.375945  479129 oci.go:144] the created container "dockerenv-806077" has a running status.
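
The long docker run above publishes the node's SSH, API server, and registry ports on loopback-only ephemeral host ports; the SSH mapping (port 33150 in the lines below) is what the provisioner dials. A hedged way to list the mappings by hand:

	# Show host-port bindings for the node container.
	docker port dockerenv-806077
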
	I0317 12:52:17.375972  479129 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa...
	I0317 12:52:17.610201  479129 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 12:52:17.637794  479129 cli_runner.go:164] Run: docker container inspect dockerenv-806077 --format={{.State.Status}}
	I0317 12:52:17.658069  479129 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 12:52:17.658116  479129 kic_runner.go:114] Args: [docker exec --privileged dockerenv-806077 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 12:52:17.757297  479129 cli_runner.go:164] Run: docker container inspect dockerenv-806077 --format={{.State.Status}}
	I0317 12:52:17.779361  479129 machine.go:93] provisionDockerMachine start ...
	I0317 12:52:17.779487  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:17.806609  479129 main.go:141] libmachine: Using SSH client type: native
	I0317 12:52:17.806857  479129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I0317 12:52:17.806863  479129 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 12:52:18.012110  479129 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-806077
	
	I0317 12:52:18.012134  479129 ubuntu.go:169] provisioning hostname "dockerenv-806077"
	I0317 12:52:18.012209  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:18.031469  479129 main.go:141] libmachine: Using SSH client type: native
	I0317 12:52:18.031727  479129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I0317 12:52:18.031736  479129 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-806077 && echo "dockerenv-806077" | sudo tee /etc/hostname
	I0317 12:52:18.180767  479129 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-806077
	
	I0317 12:52:18.180866  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:18.201871  479129 main.go:141] libmachine: Using SSH client type: native
	I0317 12:52:18.202098  479129 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33150 <nil> <nil>}
	I0317 12:52:18.202113  479129 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-806077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-806077/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-806077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 12:52:18.337406  479129 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:52:18.337444  479129 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20539-446828/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-446828/.minikube}
	I0317 12:52:18.337492  479129 ubuntu.go:177] setting up certificates
	I0317 12:52:18.337504  479129 provision.go:84] configureAuth start
	I0317 12:52:18.337571  479129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-806077
	I0317 12:52:18.355699  479129 provision.go:143] copyHostCerts
	I0317 12:52:18.355766  479129 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-446828/.minikube/ca.pem, removing ...
	I0317 12:52:18.355776  479129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.pem
	I0317 12:52:18.355871  479129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/ca.pem (1082 bytes)
	I0317 12:52:18.355962  479129 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-446828/.minikube/cert.pem, removing ...
	I0317 12:52:18.355965  479129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-446828/.minikube/cert.pem
	I0317 12:52:18.355989  479129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/cert.pem (1123 bytes)
	I0317 12:52:18.356038  479129 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-446828/.minikube/key.pem, removing ...
	I0317 12:52:18.356041  479129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-446828/.minikube/key.pem
	I0317 12:52:18.356059  479129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/key.pem (1675 bytes)
	I0317 12:52:18.356124  479129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem org=jenkins.dockerenv-806077 san=[127.0.0.1 192.168.49.2 dockerenv-806077 localhost minikube]
	I0317 12:52:18.582323  479129 provision.go:177] copyRemoteCerts
	I0317 12:52:18.582381  479129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 12:52:18.582420  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:18.600986  479129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa Username:docker}
	I0317 12:52:18.697865  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 12:52:18.723393  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0317 12:52:18.748232  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 12:52:18.773073  479129 provision.go:87] duration metric: took 435.555883ms to configureAuth
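
configureAuth generated a server certificate whose SANs are the values logged above (127.0.0.1, 192.168.49.2, dockerenv-806077, localhost, minikube). A hand-run verification sketch, with the path taken from the log and assuming OpenSSL 1.1.1+ for the -ext flag:

	# Print the subject and SAN list of the generated server cert.
	openssl x509 -in /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem \
	  -noout -subject -ext subjectAltName
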
	I0317 12:52:18.773096  479129 ubuntu.go:193] setting minikube options for container-runtime
	I0317 12:52:18.773294  479129 config.go:182] Loaded profile config "dockerenv-806077": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:52:18.773300  479129 machine.go:96] duration metric: took 993.925564ms to provisionDockerMachine
	I0317 12:52:18.773306  479129 client.go:171] duration metric: took 7.109975399s to LocalClient.Create
	I0317 12:52:18.773324  479129 start.go:167] duration metric: took 7.110044686s to libmachine.API.Create "dockerenv-806077"
	I0317 12:52:18.773330  479129 start.go:293] postStartSetup for "dockerenv-806077" (driver="docker")
	I0317 12:52:18.773341  479129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 12:52:18.773384  479129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 12:52:18.773421  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:18.792077  479129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa Username:docker}
	I0317 12:52:18.890561  479129 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 12:52:18.894376  479129 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 12:52:18.894401  479129 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 12:52:18.894408  479129 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 12:52:18.894413  479129 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 12:52:18.894425  479129 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-446828/.minikube/addons for local assets ...
	I0317 12:52:18.894488  479129 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-446828/.minikube/files for local assets ...
	I0317 12:52:18.894504  479129 start.go:296] duration metric: took 121.169371ms for postStartSetup
	I0317 12:52:18.894800  479129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-806077
	I0317 12:52:18.913824  479129 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/config.json ...
	I0317 12:52:18.914100  479129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:52:18.914168  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:18.933290  479129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa Username:docker}
	I0317 12:52:19.025403  479129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 12:52:19.030022  479129 start.go:128] duration metric: took 7.369170272s to createHost
	I0317 12:52:19.030044  479129 start.go:83] releasing machines lock for "dockerenv-806077", held for 7.369310017s
	I0317 12:52:19.030120  479129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-806077
	I0317 12:52:19.048549  479129 ssh_runner.go:195] Run: cat /version.json
	I0317 12:52:19.048589  479129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 12:52:19.048601  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:19.048647  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:19.070384  479129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa Username:docker}
	I0317 12:52:19.070613  479129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa Username:docker}
	I0317 12:52:19.236388  479129 ssh_runner.go:195] Run: systemctl --version
	I0317 12:52:19.241405  479129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 12:52:19.246139  479129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 12:52:19.271473  479129 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 12:52:19.271570  479129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 12:52:19.300515  479129 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
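
Note that "disabling" the bridge/podman CNI configs just renames them with a .mk_disabled suffix, so the change is reversible. On the node, a hypothetical follow-up (not from the log) would list what was set aside:

	# Renamed-away CNI configs; strip the suffix to restore them.
	ls /etc/cni/net.d/*.mk_disabled
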
	I0317 12:52:19.300531  479129 start.go:495] detecting cgroup driver to use...
	I0317 12:52:19.300567  479129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 12:52:19.300608  479129 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 12:52:19.313116  479129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:52:19.325619  479129 docker.go:217] disabling cri-docker service (if available) ...
	I0317 12:52:19.325674  479129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 12:52:19.340281  479129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 12:52:19.355010  479129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 12:52:19.432424  479129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 12:52:19.516449  479129 docker.go:233] disabling docker service ...
	I0317 12:52:19.516510  479129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 12:52:19.538530  479129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 12:52:19.550717  479129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 12:52:19.628023  479129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 12:52:19.707408  479129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 12:52:19.719142  479129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:52:19.735747  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 12:52:19.745859  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 12:52:19.756285  479129 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 12:52:19.756381  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 12:52:19.767191  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:52:19.777636  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 12:52:19.788378  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:52:19.799199  479129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 12:52:19.809277  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 12:52:19.819807  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 12:52:19.831011  479129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 12:52:19.842144  479129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 12:52:19.851563  479129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 12:52:19.861365  479129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:52:19.943294  479129 ssh_runner.go:195] Run: sudo systemctl restart containerd
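
The sed edits above rewrite /etc/containerd/config.toml in place (cgroupfs cgroup driver via SystemdCgroup = false, the pause:3.10 sandbox image, the runc v2 runtime, and the CNI conf_dir) before this restart. A minimal on-node spot check, assuming shell access to the node container:

	# Confirm the rewritten settings took effect and containerd came back up.
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	systemctl is-active containerd
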
	I0317 12:52:20.053670  479129 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 12:52:20.053773  479129 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 12:52:20.058082  479129 start.go:563] Will wait 60s for crictl version
	I0317 12:52:20.058139  479129 ssh_runner.go:195] Run: which crictl
	I0317 12:52:20.061724  479129 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 12:52:20.097429  479129 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 12:52:20.097503  479129 ssh_runner.go:195] Run: containerd --version
	I0317 12:52:20.123575  479129 ssh_runner.go:195] Run: containerd --version
	I0317 12:52:20.154425  479129 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 12:52:20.155959  479129 cli_runner.go:164] Run: docker network inspect dockerenv-806077 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 12:52:20.174663  479129 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0317 12:52:20.178849  479129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
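
The /etc/hosts update above uses a remove-then-append idiom: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts in one step. A generic form of the same pattern (IP and NAME are placeholders, not from the log):

	# Idempotently (re)pin NAME to IP in /etc/hosts.
	IP=192.168.49.1; NAME=host.minikube.internal
	{ grep -v "${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
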
	I0317 12:52:20.190839  479129 kubeadm.go:883] updating cluster {Name:dockerenv-806077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:dockerenv-806077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 12:52:20.190938  479129 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:52:20.190991  479129 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:52:20.227542  479129 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 12:52:20.227563  479129 containerd.go:534] Images already preloaded, skipping extraction
	I0317 12:52:20.227625  479129 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:52:20.263922  479129 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 12:52:20.263941  479129 cache_images.go:84] Images are preloaded, skipping loading
	I0317 12:52:20.263951  479129 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 containerd true true} ...
	I0317 12:52:20.264065  479129 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-806077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:dockerenv-806077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
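
The kubelet flags above land as a systemd drop-in (10-kubeadm.conf, written a few lines below); the empty ExecStart= line clears the distro default before the override takes effect. One hedged way to view the merged unit on the node:

	# Show the kubelet unit plus every drop-in that overrides it.
	systemctl cat kubelet
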
	I0317 12:52:20.264126  479129 ssh_runner.go:195] Run: sudo crictl info
	I0317 12:52:20.299796  479129 cni.go:84] Creating CNI manager for ""
	I0317 12:52:20.299808  479129 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:52:20.299821  479129 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 12:52:20.299845  479129 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-806077 NodeName:dockerenv-806077 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 12:52:20.299961  479129 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-806077"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
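
Once this generated config is written to the node (as kubeadm.yaml.new, a few lines below), it can be exercised without side effects; a hedged sketch using the binaries path from the log:

	# Validate the config by running init in dry-run mode (no changes applied).
	sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
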
	
	I0317 12:52:20.300026  479129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 12:52:20.310101  479129 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 12:52:20.310161  479129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 12:52:20.320225  479129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0317 12:52:20.340600  479129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 12:52:20.361203  479129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2306 bytes)
	I0317 12:52:20.380329  479129 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0317 12:52:20.384202  479129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:52:20.396452  479129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:52:20.470915  479129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:52:20.484929  479129 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077 for IP: 192.168.49.2
	I0317 12:52:20.484954  479129 certs.go:194] generating shared ca certs ...
	I0317 12:52:20.484975  479129 certs.go:226] acquiring lock for ca certs: {Name:mk0dd75eca163be7a048e137f4b2d32cf3ae35d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:20.485181  479129 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key
	I0317 12:52:20.485231  479129 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key
	I0317 12:52:20.485239  479129 certs.go:256] generating profile certs ...
	I0317 12:52:20.485307  479129 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/client.key
	I0317 12:52:20.485330  479129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/client.crt with IP's: []
	I0317 12:52:20.596644  479129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/client.crt ...
	I0317 12:52:20.596664  479129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/client.crt: {Name:mk58b42e446ca60124f313f076c0984f739517c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:20.596886  479129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/client.key ...
	I0317 12:52:20.596894  479129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/client.key: {Name:mk8d5011b0a8e04fcacf06cbe7d8ee9300e595d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:20.596983  479129 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.key.7a8b893c
	I0317 12:52:20.596993  479129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.crt.7a8b893c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0317 12:52:20.802426  479129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.crt.7a8b893c ...
	I0317 12:52:20.802445  479129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.crt.7a8b893c: {Name:mk00ab09145d9741607f49e27a5f7e3a7d82b1dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:20.802635  479129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.key.7a8b893c ...
	I0317 12:52:20.802643  479129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.key.7a8b893c: {Name:mk89bc698ca2d6c698850594da08ada254a89c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:20.802714  479129 certs.go:381] copying /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.crt.7a8b893c -> /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.crt
	I0317 12:52:20.802806  479129 certs.go:385] copying /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.key.7a8b893c -> /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.key
	I0317 12:52:20.802860  479129 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/proxy-client.key
	I0317 12:52:20.802872  479129 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/proxy-client.crt with IP's: []
	I0317 12:52:21.287231  479129 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/proxy-client.crt ...
	I0317 12:52:21.287249  479129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/proxy-client.crt: {Name:mkb28c1cb7e7d474e016ac6c3be0465429150698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:21.287429  479129 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/proxy-client.key ...
	I0317 12:52:21.287437  479129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/proxy-client.key: {Name:mkf532f115c22cdd988dcb93bef0c7fcd6a8755e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:21.287628  479129 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 12:52:21.287662  479129 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem (1082 bytes)
	I0317 12:52:21.287681  479129 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem (1123 bytes)
	I0317 12:52:21.287703  479129 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem (1675 bytes)
	I0317 12:52:21.288281  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 12:52:21.313381  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 12:52:21.337741  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 12:52:21.362066  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 12:52:21.387061  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0317 12:52:21.410806  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0317 12:52:21.434155  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 12:52:21.458477  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/dockerenv-806077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 12:52:21.483424  479129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 12:52:21.508687  479129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0317 12:52:21.526907  479129 ssh_runner.go:195] Run: openssl version
	I0317 12:52:21.532561  479129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 12:52:21.541890  479129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:52:21.545419  479129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:39 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:52:21.545471  479129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:52:21.552372  479129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
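
	[note] The hash/symlink pair above follows the OpenSSL c_rehash convention: lookups in /etc/ssl/certs are done by subject-hash filename, so the CA is linked under its hash with a ".0" suffix (b5213941.0 here). A minimal sketch of how that name is derived:

	  # The link name is the certificate's subject hash plus ".0":
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
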
	I0317 12:52:21.562379  479129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 12:52:21.566655  479129 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:52:21.566699  479129 kubeadm.go:392] StartCluster: {Name:dockerenv-806077 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:dockerenv-806077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:52:21.566774  479129 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 12:52:21.566821  479129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 12:52:21.606653  479129 cri.go:89] found id: ""
	I0317 12:52:21.606706  479129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 12:52:21.616129  479129 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 12:52:21.625694  479129 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 12:52:21.625745  479129 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 12:52:21.634766  479129 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 12:52:21.634775  479129 kubeadm.go:157] found existing configuration files:
	
	I0317 12:52:21.634817  479129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0317 12:52:21.643718  479129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 12:52:21.643780  479129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 12:52:21.652457  479129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0317 12:52:21.661483  479129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 12:52:21.661538  479129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 12:52:21.670407  479129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0317 12:52:21.679767  479129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 12:52:21.679819  479129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 12:52:21.688867  479129 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0317 12:52:21.697666  479129 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 12:52:21.697714  479129 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0317 12:52:21.706534  479129 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 12:52:21.767992  479129 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 12:52:21.768228  479129 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 12:52:21.829724  479129 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 12:52:31.943322  479129 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 12:52:31.943410  479129 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 12:52:31.943500  479129 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 12:52:31.943554  479129 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 12:52:31.943606  479129 kubeadm.go:310] OS: Linux
	I0317 12:52:31.943646  479129 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 12:52:31.943683  479129 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 12:52:31.943721  479129 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 12:52:31.943757  479129 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 12:52:31.943801  479129 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 12:52:31.943839  479129 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 12:52:31.943880  479129 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 12:52:31.943921  479129 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 12:52:31.943955  479129 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 12:52:31.944025  479129 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 12:52:31.944105  479129 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 12:52:31.944181  479129 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 12:52:31.944251  479129 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 12:52:31.945801  479129 out.go:235]   - Generating certificates and keys ...
	I0317 12:52:31.945880  479129 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 12:52:31.945943  479129 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 12:52:31.946006  479129 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 12:52:31.946091  479129 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 12:52:31.946194  479129 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 12:52:31.946235  479129 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 12:52:31.946303  479129 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 12:52:31.946426  479129 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [dockerenv-806077 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0317 12:52:31.946502  479129 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 12:52:31.946592  479129 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-806077 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0317 12:52:31.946644  479129 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 12:52:31.946695  479129 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 12:52:31.946728  479129 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 12:52:31.946781  479129 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 12:52:31.946825  479129 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 12:52:31.946882  479129 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 12:52:31.946957  479129 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 12:52:31.947033  479129 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 12:52:31.947110  479129 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 12:52:31.947226  479129 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 12:52:31.947311  479129 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 12:52:31.949569  479129 out.go:235]   - Booting up control plane ...
	I0317 12:52:31.949697  479129 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 12:52:31.949784  479129 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 12:52:31.949854  479129 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 12:52:31.949940  479129 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 12:52:31.950008  479129 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 12:52:31.950037  479129 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 12:52:31.950163  479129 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 12:52:31.950241  479129 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 12:52:31.950285  479129 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001237295s
	I0317 12:52:31.950366  479129 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 12:52:31.950445  479129 kubeadm.go:310] [api-check] The API server is healthy after 5.001413698s
	I0317 12:52:31.950559  479129 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 12:52:31.950657  479129 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 12:52:31.950701  479129 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 12:52:31.950857  479129 kubeadm.go:310] [mark-control-plane] Marking the node dockerenv-806077 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 12:52:31.950902  479129 kubeadm.go:310] [bootstrap-token] Using token: zowmyv.jhahm2q0sprkjr6o
	I0317 12:52:31.952160  479129 out.go:235]   - Configuring RBAC rules ...
	I0317 12:52:31.952287  479129 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 12:52:31.952389  479129 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 12:52:31.952543  479129 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 12:52:31.952685  479129 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 12:52:31.952807  479129 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 12:52:31.952903  479129 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 12:52:31.953053  479129 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 12:52:31.953127  479129 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 12:52:31.953164  479129 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 12:52:31.953166  479129 kubeadm.go:310] 
	I0317 12:52:31.953238  479129 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 12:52:31.953243  479129 kubeadm.go:310] 
	I0317 12:52:31.953349  479129 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 12:52:31.953354  479129 kubeadm.go:310] 
	I0317 12:52:31.953386  479129 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 12:52:31.953444  479129 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 12:52:31.953504  479129 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 12:52:31.953508  479129 kubeadm.go:310] 
	I0317 12:52:31.953580  479129 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 12:52:31.953584  479129 kubeadm.go:310] 
	I0317 12:52:31.953656  479129 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 12:52:31.953661  479129 kubeadm.go:310] 
	I0317 12:52:31.953739  479129 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 12:52:31.953840  479129 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 12:52:31.953932  479129 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 12:52:31.953937  479129 kubeadm.go:310] 
	I0317 12:52:31.954059  479129 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 12:52:31.954172  479129 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 12:52:31.954176  479129 kubeadm.go:310] 
	I0317 12:52:31.954303  479129 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token zowmyv.jhahm2q0sprkjr6o \
	I0317 12:52:31.954450  479129 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e05049646db0098d7df87a082a7b96dd6c54c151b6030ddf1f26dcd0982d4713 \
	I0317 12:52:31.954479  479129 kubeadm.go:310] 	--control-plane 
	I0317 12:52:31.954483  479129 kubeadm.go:310] 
	I0317 12:52:31.954603  479129 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 12:52:31.954608  479129 kubeadm.go:310] 
	I0317 12:52:31.954747  479129 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token zowmyv.jhahm2q0sprkjr6o \
	I0317 12:52:31.954919  479129 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e05049646db0098d7df87a082a7b96dd6c54c151b6030ddf1f26dcd0982d4713 
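
	[note] If the join token's CA hash is ever needed again, it can be recomputed from the cluster CA with the standard kubeadm recipe; note that this cluster's certificateDir is /var/lib/minikube/certs (see the [certs] line above), not the default /etc/kubernetes/pki:

	  # Recompute the --discovery-token-ca-cert-hash value from the CA cert:
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
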
	I0317 12:52:31.954929  479129 cni.go:84] Creating CNI manager for ""
	I0317 12:52:31.954938  479129 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:52:31.956299  479129 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 12:52:31.957562  479129 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 12:52:31.961924  479129 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 12:52:31.961938  479129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 12:52:31.982465  479129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 12:52:32.197453  479129 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 12:52:32.197569  479129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:52:32.197576  479129 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-806077 minikube.k8s.io/updated_at=2025_03_17T12_52_32_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=dockerenv-806077 minikube.k8s.io/primary=true
	I0317 12:52:32.205458  479129 ops.go:34] apiserver oom_adj: -16
	I0317 12:52:32.286645  479129 kubeadm.go:1113] duration metric: took 89.175306ms to wait for elevateKubeSystemPrivileges
	I0317 12:52:32.293759  479129 kubeadm.go:394] duration metric: took 10.72705594s to StartCluster
	I0317 12:52:32.293794  479129 settings.go:142] acquiring lock: {Name:mk72030e2b6f80365da0b928b8b3c5c72d9da724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:32.293903  479129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:52:32.294771  479129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/kubeconfig: {Name:mk0cd04f754d83d5d928c90de569ec9144a7d4e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:52:32.295026  479129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 12:52:32.295023  479129 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 12:52:32.295113  479129 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 12:52:32.295218  479129 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-806077"
	I0317 12:52:32.295236  479129 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-806077"
	I0317 12:52:32.295228  479129 addons.go:69] Setting default-storageclass=true in profile "dockerenv-806077"
	I0317 12:52:32.295248  479129 config.go:182] Loaded profile config "dockerenv-806077": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:52:32.295254  479129 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-806077"
	I0317 12:52:32.295269  479129 host.go:66] Checking if "dockerenv-806077" exists ...
	I0317 12:52:32.295732  479129 cli_runner.go:164] Run: docker container inspect dockerenv-806077 --format={{.State.Status}}
	I0317 12:52:32.295839  479129 cli_runner.go:164] Run: docker container inspect dockerenv-806077 --format={{.State.Status}}
	I0317 12:52:32.296988  479129 out.go:177] * Verifying Kubernetes components...
	I0317 12:52:32.298406  479129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:52:32.317515  479129 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 12:52:32.317522  479129 addons.go:238] Setting addon default-storageclass=true in "dockerenv-806077"
	I0317 12:52:32.317555  479129 host.go:66] Checking if "dockerenv-806077" exists ...
	I0317 12:52:32.317905  479129 cli_runner.go:164] Run: docker container inspect dockerenv-806077 --format={{.State.Status}}
	I0317 12:52:32.318786  479129 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:52:32.318796  479129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 12:52:32.318854  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:32.337520  479129 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 12:52:32.337534  479129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 12:52:32.337592  479129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-806077
	I0317 12:52:32.337779  479129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa Username:docker}
	I0317 12:52:32.364069  479129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/dockerenv-806077/id_rsa Username:docker}
	I0317 12:52:32.549005  479129 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
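
	[note] The sed pipeline above rewrites the CoreDNS ConfigMap in place: it injects a hosts block ahead of the forward plugin (plus a log directive ahead of errors), then replaces the ConfigMap. The injected Corefile fragment, reconstructed from the sed expressions:

	  hosts {
	     192.168.49.1 host.minikube.internal
	     fallthrough
	  }
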
	I0317 12:52:32.563818  479129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:52:32.566633  479129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:52:32.566633  479129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 12:52:32.962756  479129 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0317 12:52:33.123554  479129 api_server.go:52] waiting for apiserver process to appear ...
	I0317 12:52:33.123597  479129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:52:33.133135  479129 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0317 12:52:33.134421  479129 addons.go:514] duration metric: took 839.29426ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0317 12:52:33.137635  479129 api_server.go:72] duration metric: took 842.580684ms to wait for apiserver process to appear ...
	I0317 12:52:33.137661  479129 api_server.go:88] waiting for apiserver healthz status ...
	I0317 12:52:33.137682  479129 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0317 12:52:33.143067  479129 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
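
	[note] The healthz probe above hits the apiserver's health endpoint directly; assuming the default RBAC that permits anonymous access to /healthz, the same check can be reproduced from the host (-k skips verification of the minikubeCA-signed serving cert):

	  curl -k https://192.168.49.2:8443/healthz
	  # expected output: ok
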
	I0317 12:52:33.144449  479129 api_server.go:141] control plane version: v1.32.2
	I0317 12:52:33.144474  479129 api_server.go:131] duration metric: took 6.806688ms to wait for apiserver health ...
	I0317 12:52:33.144505  479129 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 12:52:33.147996  479129 system_pods.go:59] 5 kube-system pods found
	I0317 12:52:33.148041  479129 system_pods.go:61] "etcd-dockerenv-806077" [a10246bd-5670-4ff1-bd8c-177acf731742] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0317 12:52:33.148049  479129 system_pods.go:61] "kube-apiserver-dockerenv-806077" [8f1ad64a-396b-42ff-b54e-ffd5246d2dee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0317 12:52:33.148056  479129 system_pods.go:61] "kube-controller-manager-dockerenv-806077" [fd3eb297-1fc2-4a70-8119-7fa596901595] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0317 12:52:33.148061  479129 system_pods.go:61] "kube-scheduler-dockerenv-806077" [a6252aa9-4f1d-45fe-be4e-c0b4d2b3077c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0317 12:52:33.148064  479129 system_pods.go:61] "storage-provisioner" [671f66e5-afd5-4c1d-b3ee-0a593879f8cc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0317 12:52:33.148077  479129 system_pods.go:74] duration metric: took 3.560523ms to wait for pod list to return data ...
	I0317 12:52:33.148094  479129 kubeadm.go:582] duration metric: took 853.04866ms to wait for: map[apiserver:true system_pods:true]
	I0317 12:52:33.148119  479129 node_conditions.go:102] verifying NodePressure condition ...
	I0317 12:52:33.150971  479129 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0317 12:52:33.150992  479129 node_conditions.go:123] node cpu capacity is 8
	I0317 12:52:33.151005  479129 node_conditions.go:105] duration metric: took 2.882434ms to run NodePressure ...
	I0317 12:52:33.151017  479129 start.go:241] waiting for startup goroutines ...
	I0317 12:52:33.466835  479129 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-806077" context rescaled to 1 replicas
	I0317 12:52:33.466869  479129 start.go:246] waiting for cluster config update ...
	I0317 12:52:33.466880  479129 start.go:255] writing updated cluster config ...
	I0317 12:52:33.467153  479129 ssh_runner.go:195] Run: rm -f paused
	I0317 12:52:33.519534  479129 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0317 12:52:33.521481  479129 out.go:177] * Done! kubectl is now configured to use "dockerenv-806077" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e8659d5dd4d30       f1332858868e1       12 seconds ago      Running             kube-proxy                0                   e0675abcc52c1       kube-proxy-qg2h9
	03d01bbff1ab1       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   d7ae391fcd238       storage-provisioner
	58a87620978f8       d8e673e7c9983       23 seconds ago      Running             kube-scheduler            0                   c9a81cccc181b       kube-scheduler-dockerenv-806077
	b3f28bda0f10f       b6a454c5a800d       23 seconds ago      Running             kube-controller-manager   0                   d343f21ea543e       kube-controller-manager-dockerenv-806077
	e7b947c473f76       a9e7e6b294baf       23 seconds ago      Running             etcd                      0                   73045850ac420       etcd-dockerenv-806077
	0e33231de1dc3       85b7a174738ba       23 seconds ago      Running             kube-apiserver            0                   39d98547623c7       kube-apiserver-dockerenv-806077
	
	
	==> containerd <==
	Mar 17 12:52:26 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:26.158514228Z" level=info msg="StartContainer for \"e7b947c473f76bc0363dbc6e251c83b4373232d5075c080072a2e026cb9fd011\" returns successfully"
	Mar 17 12:52:26 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:26.158652848Z" level=info msg="StartContainer for \"58a87620978f87c895969dc2e9084236e2c401b23fdab70152d241ff4946811b\" returns successfully"
	Mar 17 12:52:26 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:26.163448707Z" level=info msg="StartContainer for \"b3f28bda0f10f62c97692b38bbe2fbcc07a184f4434a4bcde6721f30227ab132\" returns successfully"
	Mar 17 12:52:35 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:35.787156880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:671f66e5-afd5-4c1d-b3ee-0a593879f8cc,Namespace:kube-system,Attempt:0,}"
	Mar 17 12:52:35 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:35.859841327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:671f66e5-afd5-4c1d-b3ee-0a593879f8cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7ae391fcd238084bb90677529a51594d58d3b181119339d0eadc9a2d30c8b6a\""
	Mar 17 12:52:35 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:35.863001691Z" level=info msg="CreateContainer within sandbox \"d7ae391fcd238084bb90677529a51594d58d3b181119339d0eadc9a2d30c8b6a\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Mar 17 12:52:35 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:35.873336302Z" level=info msg="CreateContainer within sandbox \"d7ae391fcd238084bb90677529a51594d58d3b181119339d0eadc9a2d30c8b6a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"03d01bbff1ab1253a076778945821a2d445e33ec4ddc7b2bee8c6b8331e5f19b\""
	Mar 17 12:52:35 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:35.874187841Z" level=info msg="StartContainer for \"03d01bbff1ab1253a076778945821a2d445e33ec4ddc7b2bee8c6b8331e5f19b\""
	Mar 17 12:52:35 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:35.921060910Z" level=info msg="StartContainer for \"03d01bbff1ab1253a076778945821a2d445e33ec4ddc7b2bee8c6b8331e5f19b\" returns successfully"
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.187567087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-frxzl,Uid:62f10834-ccd4-40d9-a6ce-717646cff36b,Namespace:kube-system,Attempt:0,}"
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.192519297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qg2h9,Uid:a38ef977-5fdd-4a29-83e6-1ffc8e73b98c,Namespace:kube-system,Attempt:0,}"
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.252849348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qg2h9,Uid:a38ef977-5fdd-4a29-83e6-1ffc8e73b98c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0675abcc52c1c4ef12bb8be8772ab15e06a56cea02fc677a4dc12859e470cdb\""
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.255796950Z" level=info msg="CreateContainer within sandbox \"e0675abcc52c1c4ef12bb8be8772ab15e06a56cea02fc677a4dc12859e470cdb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.268192730Z" level=info msg="CreateContainer within sandbox \"e0675abcc52c1c4ef12bb8be8772ab15e06a56cea02fc677a4dc12859e470cdb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8659d5dd4d30dc033ea82082d2ff94999a5be8a672e6a7ac1ad2c1cf9af29ec\""
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.269018638Z" level=info msg="StartContainer for \"e8659d5dd4d30dc033ea82082d2ff94999a5be8a672e6a7ac1ad2c1cf9af29ec\""
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.329874334Z" level=info msg="StartContainer for \"e8659d5dd4d30dc033ea82082d2ff94999a5be8a672e6a7ac1ad2c1cf9af29ec\" returns successfully"
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.337193874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tbwj5,Uid:bce7868d-fba1-47b4-8e2e-476668799720,Namespace:kube-system,Attempt:0,}"
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.360683095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tbwj5,Uid:bce7868d-fba1-47b4-8e2e-476668799720,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\": failed to find network info for sandbox \"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\""
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.465165050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-frxzl,Uid:62f10834-ccd4-40d9-a6ce-717646cff36b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a019906ef1d691ceb42d48177b8d91297ebb84518c12c49c0f84abdbe1477ee4\""
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.467256094Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20250214-acbabc1a\""
	Mar 17 12:52:36 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:36.468926080Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:52:37 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:37.183065527Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 12:52:39 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:39.436027749Z" level=error msg="PullImage \"docker.io/kindest/kindnetd:v20250214-acbabc1a\" failed" error="failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 12:52:39 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:39.436109738Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20250214-acbabc1a: active requests=0, bytes read=11667"
	Mar 17 12:52:41 dockerenv-806077 containerd[867]: time="2025-03-17T12:52:41.641378012Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
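
	[note] The kindnetd pull above fails with HTTP 429 (Docker Hub's unauthenticated pull rate limit), so the CNI image never arrives on the node. One common workaround, sketched here, is to pull the image once on the host and side-load it into the profile's node instead of letting containerd pull it:

	  # Pull on the host (cached or authenticated), then load into the node:
	  docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	  minikube image load docker.io/kindest/kindnetd:v20250214-acbabc1a -p dockerenv-806077
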
	
	
	==> describe nodes <==
	Name:               dockerenv-806077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-806077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=dockerenv-806077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T12_52_32_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 12:52:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-806077
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 12:52:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 12:52:41 +0000   Mon, 17 Mar 2025 12:52:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 12:52:41 +0000   Mon, 17 Mar 2025 12:52:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 12:52:41 +0000   Mon, 17 Mar 2025 12:52:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 12:52:41 +0000   Mon, 17 Mar 2025 12:52:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-806077
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 62d5c61b9c6e430f96f10da55773022f
	  System UUID:                33668084-d5bf-4f46-a7e1-8a7bb9d762c9
	  Boot ID:                    40219139-515e-4d1c-86e4-bab1900bd49a
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-tbwj5                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13s
	  kube-system                 etcd-dockerenv-806077                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18s
	  kube-system                 kindnet-frxzl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14s
	  kube-system                 kube-apiserver-dockerenv-806077             250m (3%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-controller-manager-dockerenv-806077    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-proxy-qg2h9                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-scheduler-dockerenv-806077             100m (1%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12s                kube-proxy       
	  Normal   NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node dockerenv-806077 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 24s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node dockerenv-806077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node dockerenv-806077 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 24s                kubelet          Starting kubelet.
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  18s                kubelet          Node dockerenv-806077 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s                kubelet          Node dockerenv-806077 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s                kubelet          Node dockerenv-806077 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15s                node-controller  Node dockerenv-806077 event: Registered Node dockerenv-806077 in Controller
	  Normal   CIDRAssignmentFailed     15s                cidrAllocator    Node dockerenv-806077 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +2.171804] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000008] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000005] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000004] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.047810] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000009] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000001] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000011] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000008] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[Mar17 12:32] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000007] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.043860] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000003] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	
	
	==> etcd [e7b947c473f76bc0363dbc6e251c83b4373232d5075c080072a2e026cb9fd011] <==
	{"level":"info","ts":"2025-03-17T12:52:26.257018Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T12:52:26.257095Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-03-17T12:52:26.257214Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-03-17T12:52:26.257405Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-03-17T12:52:26.257462Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-17T12:52:26.985225Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-03-17T12:52:26.985292Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-03-17T12:52:26.985309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-03-17T12:52:26.985357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-03-17T12:52:26.985365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-03-17T12:52:26.985373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-03-17T12:52:26.985380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-03-17T12:52:26.986674Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:dockerenv-806077 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T12:52:26.986736Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T12:52:26.986809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T12:52:26.986946Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:52:26.987058Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T12:52:26.987090Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T12:52:26.987716Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T12:52:26.987736Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T12:52:26.987783Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:52:26.987914Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:52:26.987949Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:52:26.988663Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T12:52:26.988736Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 12:52:49 up  2:35,  0 users,  load average: 0.67, 0.58, 1.41
	Linux dockerenv-806077 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [0e33231de1dc3038f5190c0bbfa6559bfa99fc775d4b782f262e3660ef933b63] <==
	I0317 12:52:28.505547       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0317 12:52:28.505561       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0317 12:52:28.517748       1 shared_informer.go:320] Caches are synced for configmaps
	I0317 12:52:28.517869       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0317 12:52:28.548255       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0317 12:52:28.548307       1 policy_source.go:240] refreshing policies
	E0317 12:52:28.571963       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0317 12:52:28.595430       1 controller.go:615] quota admission added evaluator for: namespaces
	I0317 12:52:28.709145       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 12:52:29.298548       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0317 12:52:29.304622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0317 12:52:29.304643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 12:52:29.951518       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 12:52:29.999661       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 12:52:30.055569       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0317 12:52:30.062694       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0317 12:52:30.064037       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 12:52:30.070953       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 12:52:30.367443       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 12:52:31.361730       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 12:52:31.372946       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 12:52:31.382732       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 12:52:35.269054       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0317 12:52:35.869947       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b3f28bda0f10f62c97692b38bbe2fbcc07a184f4434a4bcde6721f30227ab132] <==
	E0317 12:52:34.897165       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"dockerenv-806077\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="dockerenv-806077" podCIDRs=["10.244.1.0/24"]
	E0317 12:52:34.897230       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"dockerenv-806077\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="dockerenv-806077"
	E0317 12:52:34.897319       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'dockerenv-806077': failed to patch node CIDR: Node \"dockerenv-806077\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.1.0/24\", \"10.244.0.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0317 12:52:34.897378       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="dockerenv-806077"
	I0317 12:52:34.902792       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="dockerenv-806077"
	I0317 12:52:34.905933       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0317 12:52:34.913401       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0317 12:52:34.915954       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0317 12:52:34.917167       1 shared_informer.go:320] Caches are synced for stateful set
	I0317 12:52:34.917191       1 shared_informer.go:320] Caches are synced for attach detach
	I0317 12:52:34.917219       1 shared_informer.go:320] Caches are synced for PVC protection
	I0317 12:52:34.917244       1 shared_informer.go:320] Caches are synced for PV protection
	I0317 12:52:34.917268       1 shared_informer.go:320] Caches are synced for deployment
	I0317 12:52:34.917287       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0317 12:52:34.918527       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0317 12:52:34.924188       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 12:52:34.934333       1 shared_informer.go:320] Caches are synced for namespace
	I0317 12:52:34.937689       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 12:52:34.942072       1 shared_informer.go:320] Caches are synced for service account
	I0317 12:52:34.973120       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="dockerenv-806077"
	I0317 12:52:36.029215       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="155.336467ms"
	I0317 12:52:36.035870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="6.599456ms"
	I0317 12:52:36.035962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.943µs"
	I0317 12:52:36.039559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="65.094µs"
	I0317 12:52:41.650477       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="dockerenv-806077"
	
	
	==> kube-proxy [e8659d5dd4d30dc033ea82082d2ff94999a5be8a672e6a7ac1ad2c1cf9af29ec] <==
	I0317 12:52:36.370234       1 server_linux.go:66] "Using iptables proxy"
	I0317 12:52:36.529694       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0317 12:52:36.529804       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 12:52:36.551738       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 12:52:36.551808       1 server_linux.go:170] "Using iptables Proxier"
	I0317 12:52:36.554032       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 12:52:36.554622       1 server.go:497] "Version info" version="v1.32.2"
	I0317 12:52:36.554657       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 12:52:36.556206       1 config.go:199] "Starting service config controller"
	I0317 12:52:36.556224       1 config.go:105] "Starting endpoint slice config controller"
	I0317 12:52:36.556261       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 12:52:36.556261       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 12:52:36.556288       1 config.go:329] "Starting node config controller"
	I0317 12:52:36.556339       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 12:52:36.657574       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 12:52:36.657615       1 shared_informer.go:320] Caches are synced for service config
	I0317 12:52:36.657693       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [58a87620978f87c895969dc2e9084236e2c401b23fdab70152d241ff4946811b] <==
	W0317 12:52:28.463567       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 12:52:28.463587       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.267735       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0317 12:52:29.267797       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.299590       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 12:52:29.299643       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.309471       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0317 12:52:29.309521       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.400509       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 12:52:29.400553       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.465830       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 12:52:29.465891       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.645928       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0317 12:52:29.645990       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.667666       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 12:52:29.667708       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.686352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 12:52:29.686410       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.724720       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 12:52:29.724772       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.724837       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0317 12:52:29.724851       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:52:29.913841       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 12:52:29.913895       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0317 12:52:31.860683       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: I0317 12:52:35.370536    1614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a38ef977-5fdd-4a29-83e6-1ffc8e73b98c-kube-proxy\") pod \"kube-proxy-qg2h9\" (UID: \"a38ef977-5fdd-4a29-83e6-1ffc8e73b98c\") " pod="kube-system/kube-proxy-qg2h9"
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: I0317 12:52:35.370551    1614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzl5k\" (UniqueName: \"kubernetes.io/projected/a38ef977-5fdd-4a29-83e6-1ffc8e73b98c-kube-api-access-pzl5k\") pod \"kube-proxy-qg2h9\" (UID: \"a38ef977-5fdd-4a29-83e6-1ffc8e73b98c\") " pod="kube-system/kube-proxy-qg2h9"
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: I0317 12:52:35.370646    1614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62f10834-ccd4-40d9-a6ce-717646cff36b-xtables-lock\") pod \"kindnet-frxzl\" (UID: \"62f10834-ccd4-40d9-a6ce-717646cff36b\") " pod="kube-system/kindnet-frxzl"
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: E0317 12:52:35.477120    1614 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: E0317 12:52:35.477178    1614 projected.go:194] Error preparing data for projected volume kube-api-access-2k56v for pod kube-system/kindnet-frxzl: configmap "kube-root-ca.crt" not found
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: E0317 12:52:35.477253    1614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f10834-ccd4-40d9-a6ce-717646cff36b-kube-api-access-2k56v podName:62f10834-ccd4-40d9-a6ce-717646cff36b nodeName:}" failed. No retries permitted until 2025-03-17 12:52:35.977229793 +0000 UTC m=+4.853542010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2k56v" (UniqueName: "kubernetes.io/projected/62f10834-ccd4-40d9-a6ce-717646cff36b-kube-api-access-2k56v") pod "kindnet-frxzl" (UID: "62f10834-ccd4-40d9-a6ce-717646cff36b") : configmap "kube-root-ca.crt" not found
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: E0317 12:52:35.477120    1614 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: E0317 12:52:35.477273    1614 projected.go:194] Error preparing data for projected volume kube-api-access-pzl5k for pod kube-system/kube-proxy-qg2h9: configmap "kube-root-ca.crt" not found
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: E0317 12:52:35.477301    1614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a38ef977-5fdd-4a29-83e6-1ffc8e73b98c-kube-api-access-pzl5k podName:a38ef977-5fdd-4a29-83e6-1ffc8e73b98c nodeName:}" failed. No retries permitted until 2025-03-17 12:52:35.97729166 +0000 UTC m=+4.853603875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pzl5k" (UniqueName: "kubernetes.io/projected/a38ef977-5fdd-4a29-83e6-1ffc8e73b98c-kube-api-access-pzl5k") pod "kube-proxy-qg2h9" (UID: "a38ef977-5fdd-4a29-83e6-1ffc8e73b98c") : configmap "kube-root-ca.crt" not found
	Mar 17 12:52:35 dockerenv-806077 kubelet[1614]: I0317 12:52:35.673504    1614 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Mar 17 12:52:36 dockerenv-806077 kubelet[1614]: I0317 12:52:36.077042    1614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkkj\" (UniqueName: \"kubernetes.io/projected/bce7868d-fba1-47b4-8e2e-476668799720-kube-api-access-kqkkj\") pod \"coredns-668d6bf9bc-tbwj5\" (UID: \"bce7868d-fba1-47b4-8e2e-476668799720\") " pod="kube-system/coredns-668d6bf9bc-tbwj5"
	Mar 17 12:52:36 dockerenv-806077 kubelet[1614]: I0317 12:52:36.077096    1614 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce7868d-fba1-47b4-8e2e-476668799720-config-volume\") pod \"coredns-668d6bf9bc-tbwj5\" (UID: \"bce7868d-fba1-47b4-8e2e-476668799720\") " pod="kube-system/coredns-668d6bf9bc-tbwj5"
	Mar 17 12:52:36 dockerenv-806077 kubelet[1614]: I0317 12:52:36.295167    1614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=3.295143383 podStartE2EDuration="3.295143383s" podCreationTimestamp="2025-03-17 12:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 12:52:36.294725613 +0000 UTC m=+5.171037836" watchObservedRunningTime="2025-03-17 12:52:36.295143383 +0000 UTC m=+5.171455604"
	Mar 17 12:52:36 dockerenv-806077 kubelet[1614]: E0317 12:52:36.361003    1614 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\": failed to find network info for sandbox \"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\""
	Mar 17 12:52:36 dockerenv-806077 kubelet[1614]: E0317 12:52:36.361148    1614 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\": failed to find network info for sandbox \"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\"" pod="kube-system/coredns-668d6bf9bc-tbwj5"
	Mar 17 12:52:36 dockerenv-806077 kubelet[1614]: E0317 12:52:36.361179    1614 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\": failed to find network info for sandbox \"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\"" pod="kube-system/coredns-668d6bf9bc-tbwj5"
	Mar 17 12:52:36 dockerenv-806077 kubelet[1614]: E0317 12:52:36.361238    1614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tbwj5_kube-system(bce7868d-fba1-47b4-8e2e-476668799720)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tbwj5_kube-system(bce7868d-fba1-47b4-8e2e-476668799720)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\\\": failed to find network info for sandbox \\\"618f7841dee990bba82ad4891e9be666e2fbec21b001ea4192d08257c73220ee\\\"\"" pod="kube-system/coredns-668d6bf9bc-tbwj5" podUID="bce7868d-fba1-47b4-8e2e-476668799720"
	Mar 17 12:52:37 dockerenv-806077 kubelet[1614]: I0317 12:52:37.298746    1614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qg2h9" podStartSLOduration=2.298725932 podStartE2EDuration="2.298725932s" podCreationTimestamp="2025-03-17 12:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 12:52:37.298481534 +0000 UTC m=+6.174793758" watchObservedRunningTime="2025-03-17 12:52:37.298725932 +0000 UTC m=+6.175038165"
	Mar 17 12:52:39 dockerenv-806077 kubelet[1614]: E0317 12:52:39.436439    1614 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Mar 17 12:52:39 dockerenv-806077 kubelet[1614]: E0317 12:52:39.436539    1614 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kindest/kindnetd:v20250214-acbabc1a"
	Mar 17 12:52:39 dockerenv-806077 kubelet[1614]: E0317 12:52:39.436764    1614 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:kindnet-cni,Image:docker.io/kindest/kindnetd:v20250214-acbabc1a,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:HOST_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.hostIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_SUBNET,Value:10.244.0.0/16,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{52428800 0} {<nil>} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-cfg,ReadOnly:false,MountPath:/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2k56v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_RAW NET_ADMIN],Drop:[],},Privileged:*false,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kindnet-frxzl_kube-system(62f10834-ccd4-40d9-a6ce-717646cff36b): ErrImagePull: failed to pull and unpack image \"docker.io/kindest/kindnetd:v20250214-acbabc1a\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Mar 17 12:52:39 dockerenv-806077 kubelet[1614]: E0317 12:52:39.438024    1614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-frxzl" podUID="62f10834-ccd4-40d9-a6ce-717646cff36b"
	Mar 17 12:52:40 dockerenv-806077 kubelet[1614]: E0317 12:52:40.295561    1614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-frxzl" podUID="62f10834-ccd4-40d9-a6ce-717646cff36b"
	Mar 17 12:52:41 dockerenv-806077 kubelet[1614]: I0317 12:52:41.640710    1614 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 17 12:52:41 dockerenv-806077 kubelet[1614]: I0317 12:52:41.641733    1614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	
	==> storage-provisioner [03d01bbff1ab1253a076778945821a2d445e33ec4ddc7b2bee8c6b8331e5f19b] <==
	I0317 12:52:35.956593       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
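The kubelet log above attributes the non-running pods to Docker Hub's unauthenticated pull rate limit (HTTP 429 on docker.io/kindest/kindnetd:v20250214-acbabc1a). As a hedged sketch, not part of the test harness: on a CI host, one way to sidestep the limit is to pull the image with authenticated credentials and preload it into the minikube node before the test runs.

	# assumption: a valid Docker Hub login is available on the host; the image
	# name is copied from the kubelet log and the profile name from this test
	docker login
	docker pull docker.io/kindest/kindnetd:v20250214-acbabc1a
	minikube image load docker.io/kindest/kindnetd:v20250214-acbabc1a -p dockerenv-806077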
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-806077 -n dockerenv-806077
helpers_test.go:261: (dbg) Run:  kubectl --context dockerenv-806077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-tbwj5 kindnet-frxzl
helpers_test.go:274: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context dockerenv-806077 describe pod coredns-668d6bf9bc-tbwj5 kindnet-frxzl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context dockerenv-806077 describe pod coredns-668d6bf9bc-tbwj5 kindnet-frxzl: exit status 1 (68.141711ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-tbwj5" not found
	Error from server (NotFound): pods "kindnet-frxzl" not found

** /stderr **
helpers_test.go:279: kubectl --context dockerenv-806077 describe pod coredns-668d6bf9bc-tbwj5 kindnet-frxzl: exit status 1
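The NotFound errors are consistent with the failing pods having been replaced between the field-selector query (helpers_test.go:261) and the describe call, so the captured names were already stale. A hedged diagnostic alternative, not part of helpers_test.go, is to re-resolve the pods by label at describe time; the labels below are assumed from the stock coredns and kindnet manifests:

	kubectl --context dockerenv-806077 describe pods -n kube-system -l k8s-app=kube-dns
	kubectl --context dockerenv-806077 describe pods -n kube-system -l app=kindnet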
helpers_test.go:175: Cleaning up "dockerenv-806077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-806077
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-806077: (2.3251563s)
--- FAIL: TestDockerEnvContainerd (40.90s)

TestFunctional/serial/StartWithProxy (610.85s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207072 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0317 12:53:36.507536  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:54:58.429167  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:14.577209  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 12:57:42.279383  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:02:14.576624  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-207072 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: exit status 80 (10m8.960929795s)

-- stdout --
	* [functional-207072] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-207072" primary control-plane node in "functional-207072" cluster
	* Pulling base image v0.0.46-1741860993-20523 ...
	* Creating docker container (CPUs=2, Memory=4000MB) ...
	* Found network options:
	  - HTTP_PROXY=localhost:37495
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:37495 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	E0317 12:53:54.985557  488746 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-9ch8s" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-9ch8s" not found
	E0317 12:57:54.991733  488746 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2253: failed minikube start. args "out/minikube-linux-amd64 start -p functional-207072 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd": exit status 80
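The stderr block flags the underlying proxy misconfiguration: HTTP_PROXY=localhost:37495 is set, but NO_PROXY does not include the minikube node IP (192.168.49.2). Following the handbook page already linked in the output (https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/), a hedged remediation sketch is to export the proxy exclusions before starting; the exact value list below is illustrative, not taken from the harness:

	export NO_PROXY=localhost,127.0.0.1,192.168.49.2
	export no_proxy=$NO_PROXY
	out/minikube-linux-amd64 start -p functional-207072 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker --container-runtime=containerd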
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-207072
helpers_test.go:235: (dbg) docker inspect functional-207072:

-- stdout --
	[
	    {
	        "Id": "99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd",
	        "Created": "2025-03-17T12:53:33.435306722Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T12:53:33.472272287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd/hostname",
	        "HostsPath": "/var/lib/docker/containers/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd/hosts",
	        "LogPath": "/var/lib/docker/containers/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd-json.log",
	        "Name": "/functional-207072",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-207072:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-207072",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd",
	                "LowerDir": "/var/lib/docker/overlay2/62115435c54ec0390f68168f517c032728d52e081274117a610b81dd3e83fb38-init/diff:/var/lib/docker/overlay2/0d1b72eeaeef000e911d7896b151fb0d0a984c18eeb180d19223ea8ba67fdac6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62115435c54ec0390f68168f517c032728d52e081274117a610b81dd3e83fb38/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62115435c54ec0390f68168f517c032728d52e081274117a610b81dd3e83fb38/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62115435c54ec0390f68168f517c032728d52e081274117a610b81dd3e83fb38/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-207072",
	                "Source": "/var/lib/docker/volumes/functional-207072/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-207072",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-207072",
	                "name.minikube.sigs.k8s.io": "functional-207072",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3bb7b68ac4db848252444f20903ebb70f2bef8eac96fb998aca641befa7612a8",
	            "SandboxKey": "/var/run/docker/netns/3bb7b68ac4db",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-207072": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:8e:4c:06:9e:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56d5d739092975ce17103d292d843574d96362dda269224b5acf5c20e29ff743",
	                    "EndpointID": "d8b88d8d0483f3ebc26d00515fb43351ca250c91b1d79d3fa56d6b89016e4a3b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-207072",
	                        "99a40b331229"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-207072 -n functional-207072
helpers_test.go:244: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 logs -n 25
helpers_test.go:252: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command   |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons     | addons-012219 addons disable   | addons-012219     | jenkins | v1.35.0 | 17 Mar 25 12:51 UTC | 17 Mar 25 12:51 UTC |
	|            | ingress-dns --alsologtostderr  |                   |         |         |                     |                     |
	|            | -v=1                           |                   |         |         |                     |                     |
	| addons     | addons-012219 addons disable   | addons-012219     | jenkins | v1.35.0 | 17 Mar 25 12:51 UTC | 17 Mar 25 12:51 UTC |
	|            | ingress --alsologtostderr -v=1 |                   |         |         |                     |                     |
	| stop       | -p addons-012219               | addons-012219     | jenkins | v1.35.0 | 17 Mar 25 12:51 UTC | 17 Mar 25 12:52 UTC |
	| addons     | enable dashboard -p            | addons-012219     | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | addons-012219                  |                   |         |         |                     |                     |
	| addons     | disable dashboard -p           | addons-012219     | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | addons-012219                  |                   |         |         |                     |                     |
	| addons     | disable gvisor -p              | addons-012219     | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | addons-012219                  |                   |         |         |                     |                     |
	| delete     | -p addons-012219               | addons-012219     | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	| start      | -p dockerenv-806077            | dockerenv-806077  | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | --driver=docker                |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p        | dockerenv-806077  | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	|            | dockerenv-806077               |                   |         |         |                     |                     |
	| delete     | -p dockerenv-806077            | dockerenv-806077  | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:52 UTC |
	| start      | -p nospam-886205 -n=1          | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:52 UTC | 17 Mar 25 12:53 UTC |
	|            | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|            | --log_dir=/tmp/nospam-886205   |                   |         |         |                     |                     |
	|            | --driver=docker                |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	| start      | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC |                     |
	|            | /tmp/nospam-886205 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| start      | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC |                     |
	|            | /tmp/nospam-886205 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| start      | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC |                     |
	|            | /tmp/nospam-886205 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| pause      | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 pause       |                   |         |         |                     |                     |
	| pause      | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 pause       |                   |         |         |                     |                     |
	| pause      | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 pause       |                   |         |         |                     |                     |
	| unpause    | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 unpause     |                   |         |         |                     |                     |
	| unpause    | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 unpause     |                   |         |         |                     |                     |
	| unpause    | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 unpause     |                   |         |         |                     |                     |
	| stop       | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 stop        |                   |         |         |                     |                     |
	| stop       | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 stop        |                   |         |         |                     |                     |
	| stop       | nospam-886205 --log_dir        | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	|            | /tmp/nospam-886205 stop        |                   |         |         |                     |                     |
	| delete     | -p nospam-886205               | nospam-886205     | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC | 17 Mar 25 12:53 UTC |
	| start      | -p functional-207072           | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 12:53 UTC |                     |
	|            | --memory=4000                  |                   |         |         |                     |                     |
	|            | --apiserver-port=8441          |                   |         |         |                     |                     |
	|            | --wait=all --driver=docker     |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
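Editor's note: the last row of the audit table above is the start invocation whose log follows. Reconstructed as a single command line from the table's flags (the binary is assumed to be the out/minikube-linux-amd64 build used throughout this report):

    out/minikube-linux-amd64 start -p functional-207072 \
      --memory=4000 --apiserver-port=8441 --wait=all \
      --driver=docker --container-runtime=containerd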
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:53:27
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:53:27.945451  488746 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:53:27.945764  488746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:53:27.945769  488746 out.go:358] Setting ErrFile to fd 2...
	I0317 12:53:27.945772  488746 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:53:27.946006  488746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 12:53:27.946632  488746 out.go:352] Setting JSON to false
	I0317 12:53:27.947752  488746 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9348,"bootTime":1742206660,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:53:27.947811  488746 start.go:139] virtualization: kvm guest
	I0317 12:53:27.950262  488746 out.go:177] * [functional-207072] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:53:27.952200  488746 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 12:53:27.952256  488746 notify.go:220] Checking for updates...
	I0317 12:53:27.955297  488746 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:53:27.956783  488746 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:53:27.958361  488746 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 12:53:27.959873  488746 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 12:53:27.961504  488746 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 12:53:27.963452  488746 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:53:27.988502  488746 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 12:53:27.988648  488746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:53:28.042952  488746 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-03-17 12:53:28.032483039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:53:28.043052  488746 docker.go:318] overlay module found
	I0317 12:53:28.045215  488746 out.go:177] * Using the docker driver based on user configuration
	I0317 12:53:28.047007  488746 start.go:297] selected driver: docker
	I0317 12:53:28.047032  488746 start.go:901] validating driver "docker" against <nil>
	I0317 12:53:28.047044  488746 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 12:53:28.047858  488746 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:53:28.099130  488746 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:43 SystemTime:2025-03-17 12:53:28.090034525 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
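Editor's note: the two docker info dumps above are minikube's driver health checks, run once before and once while validating the docker driver. The same fields can be pulled out by hand; a minimal sketch, assuming jq is installed on the host (field names taken from the dump above):

    docker system info --format '{{json .}}' \
      | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal}'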
	I0317 12:53:28.099281  488746 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:53:28.099497  488746 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0317 12:53:28.101535  488746 out.go:177] * Using Docker driver with root privileges
	I0317 12:53:28.102909  488746 cni.go:84] Creating CNI manager for ""
	I0317 12:53:28.102995  488746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:53:28.103007  488746 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 12:53:28.103098  488746 start.go:340] cluster config:
	{Name:functional-207072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:53:28.104425  488746 out.go:177] * Starting "functional-207072" primary control-plane node in "functional-207072" cluster
	I0317 12:53:28.105511  488746 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 12:53:28.106669  488746 out.go:177] * Pulling base image v0.0.46-1741860993-20523 ...
	I0317 12:53:28.107630  488746 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:53:28.107675  488746 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 12:53:28.107682  488746 cache.go:56] Caching tarball of preloaded images
	I0317 12:53:28.107728  488746 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 12:53:28.107788  488746 preload.go:172] Found /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0317 12:53:28.107796  488746 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
	I0317 12:53:28.108240  488746 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/config.json ...
	I0317 12:53:28.108264  488746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/config.json: {Name:mk225d83a6925a049a14c20bf82eb14315c6a71e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
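Editor's note: the cluster config dumped above is persisted as JSON at the config.json path shown in the save step. Assuming the file mirrors the struct's field names (an assumption, not verified from this run), individual fields can be inspected with jq:

    jq '{version: .KubernetesConfig.KubernetesVersion, apiPort: .Nodes[0].Port}' \
      /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/config.json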
	I0317 12:53:28.129641  488746 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon, skipping pull
	I0317 12:53:28.129653  488746 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in daemon, skipping load
	I0317 12:53:28.129670  488746 cache.go:230] Successfully downloaded all kic artifacts
	I0317 12:53:28.129702  488746 start.go:360] acquireMachinesLock for functional-207072: {Name:mkd803b355aff30dce30e0ca8777925cd8ae876a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0317 12:53:28.129804  488746 start.go:364] duration metric: took 87.193µs to acquireMachinesLock for "functional-207072"
	I0317 12:53:28.129823  488746 start.go:93] Provisioning new machine with config: &{Name:functional-207072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 12:53:28.129886  488746 start.go:125] createHost starting for "" (driver="docker")
	I0317 12:53:28.132057  488746 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	W0317 12:53:28.132370  488746 out.go:270] ! Local proxy ignored: not passing HTTP_PROXY=localhost:37495 to docker env.
	I0317 12:53:28.132393  488746 start.go:159] libmachine.API.Create for "functional-207072" (driver="docker")
	I0317 12:53:28.132415  488746 client.go:168] LocalClient.Create starting
	I0317 12:53:28.132503  488746 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem
	I0317 12:53:28.132539  488746 main.go:141] libmachine: Decoding PEM data...
	I0317 12:53:28.132550  488746 main.go:141] libmachine: Parsing certificate...
	I0317 12:53:28.132605  488746 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem
	I0317 12:53:28.132619  488746 main.go:141] libmachine: Decoding PEM data...
	I0317 12:53:28.132626  488746 main.go:141] libmachine: Parsing certificate...
	I0317 12:53:28.132972  488746 cli_runner.go:164] Run: docker network inspect functional-207072 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0317 12:53:28.150663  488746 cli_runner.go:211] docker network inspect functional-207072 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0317 12:53:28.150736  488746 network_create.go:284] running [docker network inspect functional-207072] to gather additional debugging logs...
	I0317 12:53:28.150752  488746 cli_runner.go:164] Run: docker network inspect functional-207072
	W0317 12:53:28.169200  488746 cli_runner.go:211] docker network inspect functional-207072 returned with exit code 1
	I0317 12:53:28.169242  488746 network_create.go:287] error running [docker network inspect functional-207072]: docker network inspect functional-207072: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-207072 not found
	I0317 12:53:28.169261  488746 network_create.go:289] output of [docker network inspect functional-207072]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-207072 not found
	
	** /stderr **
	I0317 12:53:28.169361  488746 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 12:53:28.187684  488746 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000307280}
	I0317 12:53:28.187728  488746 network_create.go:124] attempt to create docker network functional-207072 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0317 12:53:28.187781  488746 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-207072 functional-207072
	I0317 12:53:28.239626  488746 network_create.go:108] docker network functional-207072 192.168.49.0/24 created
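Editor's note: the network-create step above is reproducible outside minikube with the exact flags the log records (subnet, gateway, MTU and labels copied from the Run: line, including the literal -o --ip-masq / -o --icc options minikube passes); the inspect afterwards confirms the IPAM settings:

    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=functional-207072 functional-207072
    docker network inspect functional-207072 --format '{{json .IPAM.Config}}'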
	I0317 12:53:28.239656  488746 kic.go:121] calculated static IP "192.168.49.2" for the "functional-207072" container
	I0317 12:53:28.239747  488746 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0317 12:53:28.258305  488746 cli_runner.go:164] Run: docker volume create functional-207072 --label name.minikube.sigs.k8s.io=functional-207072 --label created_by.minikube.sigs.k8s.io=true
	I0317 12:53:28.277990  488746 oci.go:103] Successfully created a docker volume functional-207072
	I0317 12:53:28.278077  488746 cli_runner.go:164] Run: docker run --rm --name functional-207072-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-207072 --entrypoint /usr/bin/test -v functional-207072:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -d /var/lib
	I0317 12:53:28.715885  488746 oci.go:107] Successfully prepared a docker volume functional-207072
	I0317 12:53:28.715957  488746 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:53:28.715980  488746 kic.go:194] Starting extracting preloaded images to volume ...
	I0317 12:53:28.716054  488746 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-207072:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir
	I0317 12:53:33.369691  488746 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-207072:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 -I lz4 -xf /preloaded.tar -C /extractDir: (4.653569035s)
	I0317 12:53:33.369728  488746 kic.go:203] duration metric: took 4.653743278s to extract preloaded images to volume ...
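Editor's note: the preload extraction above uses a common pattern: populate a named volume by running a throwaway container whose entrypoint is tar. A generalized sketch (tarball path and image tag are placeholders, not values from this run):

    docker volume create demo-data
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-data:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:<tag> -I lz4 -xf /preloaded.tar -C /extractDir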
	W0317 12:53:33.369920  488746 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0317 12:53:33.370015  488746 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0317 12:53:33.418618  488746 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-207072 --name functional-207072 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-207072 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-207072 --network functional-207072 --ip 192.168.49.2 --volume functional-207072:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185
	I0317 12:53:33.702583  488746 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Running}}
	I0317 12:53:33.721528  488746 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
	I0317 12:53:33.741711  488746 cli_runner.go:164] Run: docker exec functional-207072 stat /var/lib/dpkg/alternatives/iptables
	I0317 12:53:33.787770  488746 oci.go:144] the created container "functional-207072" has a running status.
	I0317 12:53:33.787794  488746 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa...
	I0317 12:53:34.292082  488746 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0317 12:53:34.312919  488746 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
	I0317 12:53:34.335190  488746 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0317 12:53:34.335206  488746 kic_runner.go:114] Args: [docker exec --privileged functional-207072 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0317 12:53:34.384522  488746 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
	I0317 12:53:34.405142  488746 machine.go:93] provisionDockerMachine start ...
	I0317 12:53:34.405237  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:34.427007  488746 main.go:141] libmachine: Using SSH client type: native
	I0317 12:53:34.427445  488746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33160 <nil> <nil>}
	I0317 12:53:34.427467  488746 main.go:141] libmachine: About to run SSH command:
	hostname
	I0317 12:53:34.564695  488746 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-207072
	
	I0317 12:53:34.564716  488746 ubuntu.go:169] provisioning hostname "functional-207072"
	I0317 12:53:34.564787  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:34.583624  488746 main.go:141] libmachine: Using SSH client type: native
	I0317 12:53:34.583839  488746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33160 <nil> <nil>}
	I0317 12:53:34.583853  488746 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-207072 && echo "functional-207072" | sudo tee /etc/hostname
	I0317 12:53:34.736101  488746 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-207072
	
	I0317 12:53:34.736180  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:34.754717  488746 main.go:141] libmachine: Using SSH client type: native
	I0317 12:53:34.754957  488746 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 33160 <nil> <nil>}
	I0317 12:53:34.754975  488746 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-207072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-207072/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-207072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0317 12:53:34.892784  488746 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0317 12:53:34.892811  488746 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20539-446828/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-446828/.minikube}
	I0317 12:53:34.892865  488746 ubuntu.go:177] setting up certificates
	I0317 12:53:34.892875  488746 provision.go:84] configureAuth start
	I0317 12:53:34.892938  488746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-207072
	I0317 12:53:34.910600  488746 provision.go:143] copyHostCerts
	I0317 12:53:34.910660  488746 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-446828/.minikube/ca.pem, removing ...
	I0317 12:53:34.910669  488746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.pem
	I0317 12:53:34.910731  488746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/ca.pem (1082 bytes)
	I0317 12:53:34.910831  488746 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-446828/.minikube/cert.pem, removing ...
	I0317 12:53:34.910835  488746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-446828/.minikube/cert.pem
	I0317 12:53:34.910857  488746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/cert.pem (1123 bytes)
	I0317 12:53:34.910917  488746 exec_runner.go:144] found /home/jenkins/minikube-integration/20539-446828/.minikube/key.pem, removing ...
	I0317 12:53:34.910920  488746 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20539-446828/.minikube/key.pem
	I0317 12:53:34.910938  488746 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-446828/.minikube/key.pem (1675 bytes)
	I0317 12:53:34.910992  488746 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem org=jenkins.functional-207072 san=[127.0.0.1 192.168.49.2 functional-207072 localhost minikube]
	I0317 12:53:35.175535  488746 provision.go:177] copyRemoteCerts
	I0317 12:53:35.175584  488746 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0317 12:53:35.175629  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:35.193830  488746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
	I0317 12:53:35.289400  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0317 12:53:35.313259  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0317 12:53:35.337255  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0317 12:53:35.361992  488746 provision.go:87] duration metric: took 469.10151ms to configureAuth
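Editor's note: configureAuth above signs a server certificate against the minikube CA with SANs for every name the machine answers to (san=[127.0.0.1 192.168.49.2 functional-207072 localhost minikube]). An equivalent done by hand with openssl, assuming ca.pem/ca-key.pem already exist; a sketch of the idea, not minikube's actual code path:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.functional-207072" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-207072,DNS:localhost,DNS:minikube') \
      -out server.pem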
	I0317 12:53:35.362020  488746 ubuntu.go:193] setting minikube options for container-runtime
	I0317 12:53:35.362192  488746 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:53:35.362197  488746 machine.go:96] duration metric: took 957.041474ms to provisionDockerMachine
	I0317 12:53:35.362203  488746 client.go:171] duration metric: took 7.229784424s to LocalClient.Create
	I0317 12:53:35.362219  488746 start.go:167] duration metric: took 7.229827042s to libmachine.API.Create "functional-207072"
	I0317 12:53:35.362225  488746 start.go:293] postStartSetup for "functional-207072" (driver="docker")
	I0317 12:53:35.362231  488746 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0317 12:53:35.362276  488746 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0317 12:53:35.362308  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:35.381021  488746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
	I0317 12:53:35.478286  488746 ssh_runner.go:195] Run: cat /etc/os-release
	I0317 12:53:35.482032  488746 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0317 12:53:35.482055  488746 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0317 12:53:35.482066  488746 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0317 12:53:35.482074  488746 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0317 12:53:35.482088  488746 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-446828/.minikube/addons for local assets ...
	I0317 12:53:35.482143  488746 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-446828/.minikube/files for local assets ...
	I0317 12:53:35.482222  488746 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-446828/.minikube/files/etc/ssl/certs/4537322.pem -> 4537322.pem in /etc/ssl/certs
	I0317 12:53:35.482289  488746 filesync.go:149] local asset: /home/jenkins/minikube-integration/20539-446828/.minikube/files/etc/test/nested/copy/453732/hosts -> hosts in /etc/test/nested/copy/453732
	I0317 12:53:35.482324  488746 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/453732
	I0317 12:53:35.491447  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/files/etc/ssl/certs/4537322.pem --> /etc/ssl/certs/4537322.pem (1708 bytes)
	I0317 12:53:35.516754  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/files/etc/test/nested/copy/453732/hosts --> /etc/test/nested/copy/453732/hosts (40 bytes)
	I0317 12:53:35.541631  488746 start.go:296] duration metric: took 179.385147ms for postStartSetup
	I0317 12:53:35.542036  488746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-207072
	I0317 12:53:35.560473  488746 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/config.json ...
	I0317 12:53:35.560734  488746 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 12:53:35.560772  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:35.578987  488746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
	I0317 12:53:35.673544  488746 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0317 12:53:35.677962  488746 start.go:128] duration metric: took 7.548058448s to createHost
	I0317 12:53:35.677981  488746 start.go:83] releasing machines lock for "functional-207072", held for 7.548170362s
	I0317 12:53:35.678059  488746 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-207072
	I0317 12:53:35.699899  488746 out.go:177] * Found network options:
	I0317 12:53:35.701975  488746 out.go:177]   - HTTP_PROXY=localhost:37495
	W0317 12:53:35.703245  488746 out.go:270] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I0317 12:53:35.704478  488746 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I0317 12:53:35.705697  488746 ssh_runner.go:195] Run: cat /version.json
	I0317 12:53:35.705761  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:35.705785  488746 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0317 12:53:35.705838  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:35.724304  488746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
	I0317 12:53:35.724449  488746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
	I0317 12:53:35.888183  488746 ssh_runner.go:195] Run: systemctl --version
	I0317 12:53:35.892795  488746 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0317 12:53:35.897365  488746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0317 12:53:35.923689  488746 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0317 12:53:35.923758  488746 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0317 12:53:35.951954  488746 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
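Editor's note: the find/sed pair above gives any loopback CNI config an explicit name and pins cniVersion to 1.0.0, while the second find disables bridge/podman configs by renaming them with an .mk_disabled suffix. A patched loopback file would end up shaped like this (a sketch of the sed result, not a file captured from this run):

    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }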
	I0317 12:53:35.951971  488746 start.go:495] detecting cgroup driver to use...
	I0317 12:53:35.952006  488746 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0317 12:53:35.952046  488746 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0317 12:53:35.966174  488746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0317 12:53:35.978447  488746 docker.go:217] disabling cri-docker service (if available) ...
	I0317 12:53:35.978516  488746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0317 12:53:35.992193  488746 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0317 12:53:36.006342  488746 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0317 12:53:36.086045  488746 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0317 12:53:36.166276  488746 docker.go:233] disabling docker service ...
	I0317 12:53:36.166334  488746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0317 12:53:36.185430  488746 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0317 12:53:36.197567  488746 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0317 12:53:36.281206  488746 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0317 12:53:36.356634  488746 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0317 12:53:36.367926  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0317 12:53:36.384469  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0317 12:53:36.394994  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0317 12:53:36.406031  488746 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0317 12:53:36.406092  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0317 12:53:36.416564  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:53:36.427016  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0317 12:53:36.437428  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0317 12:53:36.447689  488746 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0317 12:53:36.458083  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0317 12:53:36.469136  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0317 12:53:36.479776  488746 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0317 12:53:36.490449  488746 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0317 12:53:36.499711  488746 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0317 12:53:36.509107  488746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:53:36.584490  488746 ssh_runner.go:195] Run: sudo systemctl restart containerd
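Editor's note: the run of sed edits above rewrites /etc/containerd/config.toml in place before the daemon-reload and restart. After patching, the touched keys would look roughly like this under containerd 1.7's CRI plugin section (a sketch of only the affected fields, not the full file from this run):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false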
	I0317 12:53:36.698931  488746 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0317 12:53:36.698987  488746 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0317 12:53:36.703027  488746 start.go:563] Will wait 60s for crictl version
	I0317 12:53:36.703094  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:53:36.706751  488746 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0317 12:53:36.744801  488746 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.25
	RuntimeApiVersion:  v1
	I0317 12:53:36.744888  488746 ssh_runner.go:195] Run: containerd --version
	I0317 12:53:36.772588  488746 ssh_runner.go:195] Run: containerd --version
	I0317 12:53:36.799740  488746 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.25 ...
	I0317 12:53:36.801173  488746 cli_runner.go:164] Run: docker network inspect functional-207072 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0317 12:53:36.819512  488746 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0317 12:53:36.823636  488746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:53:36.835533  488746 kubeadm.go:883] updating cluster {Name:functional-207072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0317 12:53:36.835664  488746 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:53:36.835731  488746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:53:36.870770  488746 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 12:53:36.870787  488746 containerd.go:534] Images already preloaded, skipping extraction
	I0317 12:53:36.870855  488746 ssh_runner.go:195] Run: sudo crictl images --output json
	I0317 12:53:36.904740  488746 containerd.go:627] all images are preloaded for containerd runtime.
	I0317 12:53:36.904757  488746 cache_images.go:84] Images are preloaded, skipping loading
	I0317 12:53:36.904766  488746 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.32.2 containerd true true} ...
	I0317 12:53:36.904881  488746 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-207072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0317 12:53:36.904950  488746 ssh_runner.go:195] Run: sudo crictl info
	I0317 12:53:36.939957  488746 cni.go:84] Creating CNI manager for ""
	I0317 12:53:36.939966  488746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:53:36.939975  488746 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0317 12:53:36.939993  488746 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-207072 NodeName:functional-207072 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0317 12:53:36.940106  488746 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-207072"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0317 12:53:36.940176  488746 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0317 12:53:36.949469  488746 binaries.go:44] Found k8s binaries, skipping transfer
	I0317 12:53:36.949535  488746 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0317 12:53:36.958614  488746 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0317 12:53:36.976374  488746 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0317 12:53:36.994379  488746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
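Editor's note: the scp above lands the generated kubeadm config at /var/tmp/minikube/kubeadm.yaml.new. A config like this can be sanity-checked without mutating anything by running kubeadm's dry-run mode inside the node container (a hypothetical check; nothing in this log runs it):

    sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run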
	I0317 12:53:37.013033  488746 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0317 12:53:37.016823  488746 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0317 12:53:37.028436  488746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:53:37.108740  488746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:53:37.122378  488746 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072 for IP: 192.168.49.2
	I0317 12:53:37.122398  488746 certs.go:194] generating shared ca certs ...
	I0317 12:53:37.122418  488746 certs.go:226] acquiring lock for ca certs: {Name:mk0dd75eca163be7a048e137f4b2d32cf3ae35d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:53:37.122583  488746 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key
	I0317 12:53:37.122613  488746 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key
	I0317 12:53:37.122620  488746 certs.go:256] generating profile certs ...
	I0317 12:53:37.122671  488746 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.key
	I0317 12:53:37.122679  488746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt with IP's: []
	I0317 12:53:37.234043  488746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt ...
	I0317 12:53:37.234063  488746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: {Name:mk7244c4649bd973b9bcc714e6a18baf7cb30762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:53:37.234269  488746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.key ...
	I0317 12:53:37.234284  488746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.key: {Name:mkb8aee26b8be14bfcaaa354dbf9d2d38509cca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:53:37.234364  488746 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.key.fedffdfc
	I0317 12:53:37.234373  488746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.crt.fedffdfc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0317 12:53:37.347036  488746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.crt.fedffdfc ...
	I0317 12:53:37.347057  488746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.crt.fedffdfc: {Name:mk2621773c7fc2cc5c44210ee384504fed601957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:53:37.347238  488746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.key.fedffdfc ...
	I0317 12:53:37.347246  488746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.key.fedffdfc: {Name:mkd9ff3afade3a5d755ff24ca85d701279160894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:53:37.347310  488746 certs.go:381] copying /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.crt.fedffdfc -> /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.crt
	I0317 12:53:37.347398  488746 certs.go:385] copying /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.key.fedffdfc -> /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.key
	I0317 12:53:37.347449  488746 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/proxy-client.key
	I0317 12:53:37.347461  488746 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/proxy-client.crt with IP's: []
	I0317 12:53:37.547424  488746 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/proxy-client.crt ...
	I0317 12:53:37.547444  488746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/proxy-client.crt: {Name:mk91f6e1dc4431f3dde9e01eabd1d63575c5c8fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:53:37.547625  488746 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/proxy-client.key ...
	I0317 12:53:37.547633  488746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/proxy-client.key: {Name:mk56945c2881b246c14edc96c7aabed41db26499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
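
The three profile certs generated above (client, apiserver, aggregator proxy-client) are ordinary x509 leaf certificates signed by the minikube CA, with the apiserver cert carrying the service and node IPs as SANs. A minimal sketch of that step using Go's standard crypto/x509 — illustrative only, not minikube's actual crypto.go:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA key/cert (the real run reuses the existing minikubeCA).
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // Apiserver leaf cert with the IP SANs seen in the log line above.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
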
	I0317 12:53:37.547801  488746 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/453732.pem (1338 bytes)
	W0317 12:53:37.547837  488746 certs.go:480] ignoring /home/jenkins/minikube-integration/20539-446828/.minikube/certs/453732_empty.pem, impossibly tiny 0 bytes
	I0317 12:53:37.547845  488746 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca-key.pem (1675 bytes)
	I0317 12:53:37.547866  488746 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/ca.pem (1082 bytes)
	I0317 12:53:37.547885  488746 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/cert.pem (1123 bytes)
	I0317 12:53:37.547904  488746 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/certs/key.pem (1675 bytes)
	I0317 12:53:37.547938  488746 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-446828/.minikube/files/etc/ssl/certs/4537322.pem (1708 bytes)
	I0317 12:53:37.548566  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0317 12:53:37.574833  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0317 12:53:37.600719  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0317 12:53:37.626204  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0317 12:53:37.651336  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0317 12:53:37.676471  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0317 12:53:37.702097  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0317 12:53:37.728039  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0317 12:53:37.753805  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0317 12:53:37.780243  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/certs/453732.pem --> /usr/share/ca-certificates/453732.pem (1338 bytes)
	I0317 12:53:37.805474  488746 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-446828/.minikube/files/etc/ssl/certs/4537322.pem --> /usr/share/ca-certificates/4537322.pem (1708 bytes)
	I0317 12:53:37.831821  488746 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
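
"scp memory" above means the runner streams an in-memory buffer (here the generated kubeconfig) directly to a path inside the node instead of copying a local file. A hedged sketch of the idiom via ssh + sudo tee, reusing the SSH endpoint (docker@127.0.0.1:33160) that appears later in this log; copyMemory is a made-up helper, not minikube's ssh_runner API:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // copyMemory writes data to remotePath on the node over SSH,
    // the moral equivalent of ssh_runner's "scp memory --> <path>".
    func copyMemory(data []byte, remotePath string) error {
        cmd := exec.Command("ssh", "-p", "33160", "docker@127.0.0.1",
            "sudo tee "+remotePath+" >/dev/null")
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    func main() {
        kubeconfig := []byte("apiVersion: v1\nkind: Config\n# ...\n")
        if err := copyMemory(kubeconfig, "/var/lib/minikube/kubeconfig"); err != nil {
            fmt.Println(err)
        }
    }
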
	I0317 12:53:37.850716  488746 ssh_runner.go:195] Run: openssl version
	I0317 12:53:37.856478  488746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0317 12:53:37.866831  488746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:53:37.871055  488746 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:39 /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:53:37.871114  488746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0317 12:53:37.878513  488746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0317 12:53:37.888601  488746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/453732.pem && ln -fs /usr/share/ca-certificates/453732.pem /etc/ssl/certs/453732.pem"
	I0317 12:53:37.898550  488746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/453732.pem
	I0317 12:53:37.902564  488746 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 17 12:53 /usr/share/ca-certificates/453732.pem
	I0317 12:53:37.902625  488746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/453732.pem
	I0317 12:53:37.910206  488746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/453732.pem /etc/ssl/certs/51391683.0"
	I0317 12:53:37.920783  488746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4537322.pem && ln -fs /usr/share/ca-certificates/4537322.pem /etc/ssl/certs/4537322.pem"
	I0317 12:53:37.931740  488746 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4537322.pem
	I0317 12:53:37.936147  488746 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 17 12:53 /usr/share/ca-certificates/4537322.pem
	I0317 12:53:37.936211  488746 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4537322.pem
	I0317 12:53:37.943450  488746 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4537322.pem /etc/ssl/certs/3ec20f2e.0"
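
Each CA bundle copied under /usr/share/ca-certificates is then registered the way OpenSSL expects: hash the certificate subject with `openssl x509 -hash -noout` and point a /etc/ssl/certs/<hash>.0 symlink at it, which is exactly the test/ln pairs above (e.g. b5213941.0 for minikubeCA.pem). A small sketch of that dance; installCACert is a hypothetical helper, not minikube's API:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // installCACert computes the OpenSSL subject hash for a PEM and links
    // /etc/ssl/certs/<hash>.0 to it so lookup-by-hash finds the CA.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Equivalent of: test -L <link> || ln -fs <pem> <link>
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println(err)
        }
    }
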
	I0317 12:53:37.954335  488746 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0317 12:53:37.958280  488746 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0317 12:53:37.958328  488746 kubeadm.go:392] StartCluster: {Name:functional-207072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:53:37.958400  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0317 12:53:37.958451  488746 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0317 12:53:37.995429  488746 cri.go:89] found id: ""
	I0317 12:53:37.995496  488746 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0317 12:53:38.004856  488746 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0317 12:53:38.014605  488746 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0317 12:53:38.014662  488746 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0317 12:53:38.024821  488746 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0317 12:53:38.024834  488746 kubeadm.go:157] found existing configuration files:
	
	I0317 12:53:38.024894  488746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0317 12:53:38.034202  488746 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0317 12:53:38.034253  488746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0317 12:53:38.043713  488746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0317 12:53:38.052880  488746 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0317 12:53:38.052929  488746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0317 12:53:38.062179  488746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0317 12:53:38.071310  488746 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0317 12:53:38.071371  488746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0317 12:53:38.080647  488746 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0317 12:53:38.089899  488746 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0317 12:53:38.089946  488746 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
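
The grep/rm pairs above are a stale-config sweep: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before `kubeadm init` runs (here all four files are simply absent, so every grep exits 2 and each rm is a no-op). A sketch of the same loop, assuming direct file access rather than an SSH runner:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:8441")
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, endpoint) {
                fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
                os.Remove(f) // mirrors: sudo rm -f <conf>
            }
        }
    }
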
	I0317 12:53:38.098666  488746 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0317 12:53:38.157990  488746 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0317 12:53:38.158236  488746 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0317 12:53:38.217684  488746 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0317 12:53:48.158554  488746 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0317 12:53:48.158600  488746 kubeadm.go:310] [preflight] Running pre-flight checks
	I0317 12:53:48.158740  488746 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0317 12:53:48.158832  488746 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0317 12:53:48.158897  488746 kubeadm.go:310] OS: Linux
	I0317 12:53:48.158963  488746 kubeadm.go:310] CGROUPS_CPU: enabled
	I0317 12:53:48.159027  488746 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0317 12:53:48.159101  488746 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0317 12:53:48.159148  488746 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0317 12:53:48.159229  488746 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0317 12:53:48.159302  488746 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0317 12:53:48.159368  488746 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0317 12:53:48.159411  488746 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0317 12:53:48.159455  488746 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0317 12:53:48.159554  488746 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0317 12:53:48.159655  488746 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0317 12:53:48.159799  488746 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0317 12:53:48.159885  488746 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0317 12:53:48.161743  488746 out.go:235]   - Generating certificates and keys ...
	I0317 12:53:48.161871  488746 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0317 12:53:48.161954  488746 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0317 12:53:48.162040  488746 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0317 12:53:48.162120  488746 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0317 12:53:48.162171  488746 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0317 12:53:48.162233  488746 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0317 12:53:48.162274  488746 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0317 12:53:48.162385  488746 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [functional-207072 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0317 12:53:48.162437  488746 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0317 12:53:48.162558  488746 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [functional-207072 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0317 12:53:48.162626  488746 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0317 12:53:48.162699  488746 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0317 12:53:48.162744  488746 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0317 12:53:48.162785  488746 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0317 12:53:48.162830  488746 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0317 12:53:48.162878  488746 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0317 12:53:48.162933  488746 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0317 12:53:48.163059  488746 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0317 12:53:48.163121  488746 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0317 12:53:48.163191  488746 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0317 12:53:48.163257  488746 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0317 12:53:48.165130  488746 out.go:235]   - Booting up control plane ...
	I0317 12:53:48.165220  488746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0317 12:53:48.165282  488746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0317 12:53:48.165331  488746 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0317 12:53:48.165427  488746 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0317 12:53:48.165497  488746 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0317 12:53:48.165526  488746 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0317 12:53:48.165650  488746 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0317 12:53:48.165772  488746 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0317 12:53:48.165820  488746 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.883046ms
	I0317 12:53:48.165887  488746 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0317 12:53:48.165955  488746 kubeadm.go:310] [api-check] The API server is healthy after 5.009511411s
	I0317 12:53:48.166058  488746 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0317 12:53:48.166208  488746 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0317 12:53:48.166275  488746 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0317 12:53:48.166497  488746 kubeadm.go:310] [mark-control-plane] Marking the node functional-207072 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0317 12:53:48.166563  488746 kubeadm.go:310] [bootstrap-token] Using token: ry5vw2.j4d8awms6mh06hbk
	I0317 12:53:48.168178  488746 out.go:235]   - Configuring RBAC rules ...
	I0317 12:53:48.168281  488746 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0317 12:53:48.168394  488746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0317 12:53:48.168512  488746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0317 12:53:48.168637  488746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0317 12:53:48.168763  488746 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0317 12:53:48.168884  488746 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0317 12:53:48.169002  488746 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0317 12:53:48.169035  488746 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0317 12:53:48.169069  488746 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0317 12:53:48.169071  488746 kubeadm.go:310] 
	I0317 12:53:48.169119  488746 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0317 12:53:48.169125  488746 kubeadm.go:310] 
	I0317 12:53:48.169183  488746 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0317 12:53:48.169185  488746 kubeadm.go:310] 
	I0317 12:53:48.169220  488746 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0317 12:53:48.169285  488746 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0317 12:53:48.169326  488746 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0317 12:53:48.169329  488746 kubeadm.go:310] 
	I0317 12:53:48.169378  488746 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0317 12:53:48.169382  488746 kubeadm.go:310] 
	I0317 12:53:48.169421  488746 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0317 12:53:48.169423  488746 kubeadm.go:310] 
	I0317 12:53:48.169466  488746 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0317 12:53:48.169527  488746 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0317 12:53:48.169593  488746 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0317 12:53:48.169597  488746 kubeadm.go:310] 
	I0317 12:53:48.169678  488746 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0317 12:53:48.169753  488746 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0317 12:53:48.169756  488746 kubeadm.go:310] 
	I0317 12:53:48.169840  488746 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8441 --token ry5vw2.j4d8awms6mh06hbk \
	I0317 12:53:48.169977  488746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e05049646db0098d7df87a082a7b96dd6c54c151b6030ddf1f26dcd0982d4713 \
	I0317 12:53:48.170009  488746 kubeadm.go:310] 	--control-plane 
	I0317 12:53:48.170013  488746 kubeadm.go:310] 
	I0317 12:53:48.170102  488746 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0317 12:53:48.170104  488746 kubeadm.go:310] 
	I0317 12:53:48.170226  488746 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8441 --token ry5vw2.j4d8awms6mh06hbk \
	I0317 12:53:48.170354  488746 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e05049646db0098d7df87a082a7b96dd6c54c151b6030ddf1f26dcd0982d4713 
	I0317 12:53:48.170374  488746 cni.go:84] Creating CNI manager for ""
	I0317 12:53:48.170382  488746 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:53:48.171902  488746 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0317 12:53:48.173298  488746 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0317 12:53:48.177851  488746 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0317 12:53:48.177866  488746 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0317 12:53:48.197403  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0317 12:53:48.431104  488746 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0317 12:53:48.431189  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:48.431195  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes functional-207072 minikube.k8s.io/updated_at=2025_03_17T12_53_48_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=functional-207072 minikube.k8s.io/primary=true
	I0317 12:53:48.439163  488746 ops.go:34] apiserver oom_adj: -16
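
The oom_adj probe reads /proc/<apiserver pid>/oom_adj and confirms the kubelet started the apiserver with -16, i.e. strongly shielded from the OOM killer. A minimal sketch of that check (assumes pgrep is available on the node, as in the bash one-liner above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("kube-apiserver not running:", err)
            return
        }
        adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj) // the run above saw -16
    }
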
	I0317 12:53:48.570348  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:49.070444  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:49.571312  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:50.071322  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:50.570421  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:51.071209  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:51.570566  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:52.070635  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:52.571012  488746 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0317 12:53:52.667670  488746 kubeadm.go:1113] duration metric: took 4.236550565s to wait for elevateKubeSystemPrivileges
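
elevateKubeSystemPrivileges is the ~500ms polling loop visible above: the cluster-admin RBAC binding for kube-system:default can only land once the default service account exists, so minikube retries `kubectl get sa default` until it succeeds (4.24s here). A hedged sketch of that retry loop; paths and the 2-minute deadline are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectlPath := "/var/lib/minikube/binaries/v1.32.2/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectlPath, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
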
	I0317 12:53:52.667710  488746 kubeadm.go:394] duration metric: took 14.709387797s to StartCluster
	I0317 12:53:52.667736  488746 settings.go:142] acquiring lock: {Name:mk72030e2b6f80365da0b928b8b3c5c72d9da724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:53:52.667816  488746 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:53:52.668512  488746 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/kubeconfig: {Name:mk0cd04f754d83d5d928c90de569ec9144a7d4e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:53:52.668752  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0317 12:53:52.668794  488746 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0317 12:53:52.668840  488746 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0317 12:53:52.668918  488746 addons.go:69] Setting storage-provisioner=true in profile "functional-207072"
	I0317 12:53:52.668935  488746 addons.go:238] Setting addon storage-provisioner=true in "functional-207072"
	I0317 12:53:52.668951  488746 addons.go:69] Setting default-storageclass=true in profile "functional-207072"
	I0317 12:53:52.668966  488746 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-207072"
	I0317 12:53:52.668969  488746 host.go:66] Checking if "functional-207072" exists ...
	I0317 12:53:52.669058  488746 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 12:53:52.669385  488746 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
	I0317 12:53:52.669545  488746 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
	I0317 12:53:52.670153  488746 out.go:177] * Verifying Kubernetes components...
	I0317 12:53:52.671332  488746 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0317 12:53:52.693207  488746 addons.go:238] Setting addon default-storageclass=true in "functional-207072"
	I0317 12:53:52.693239  488746 host.go:66] Checking if "functional-207072" exists ...
	I0317 12:53:52.693568  488746 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0317 12:53:52.693578  488746 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
	I0317 12:53:52.695043  488746 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:53:52.695057  488746 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0317 12:53:52.695124  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:52.722339  488746 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0317 12:53:52.722353  488746 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0317 12:53:52.722430  488746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 12:53:52.722523  488746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
	I0317 12:53:52.741540  488746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
	I0317 12:53:52.782664  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0317 12:53:52.850296  488746 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0317 12:53:53.046164  488746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0317 12:53:53.049189  488746 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0317 12:53:53.462363  488746 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
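
The sed pipeline at 12:53:52 splices a hosts stanza into the CoreDNS Corefile so that host.minikube.internal resolves to the gateway IP; the fragment it inserts (read straight off the sed expression above) is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
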
	I0317 12:53:53.464818  488746 node_ready.go:35] waiting up to 6m0s for node "functional-207072" to be "Ready" ...
	I0317 12:53:53.475853  488746 node_ready.go:49] node "functional-207072" has status "Ready":"True"
	I0317 12:53:53.475870  488746 node_ready.go:38] duration metric: took 11.032015ms for node "functional-207072" to be "Ready" ...
	I0317 12:53:53.475881  488746 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0317 12:53:53.482109  488746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9ch8s" in "kube-system" namespace to be "Ready" ...
	I0317 12:53:53.779139  488746 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0317 12:53:53.781571  488746 addons.go:514] duration metric: took 1.112689307s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0317 12:53:53.968082  488746 kapi.go:214] "coredns" deployment in "kube-system" namespace and "functional-207072" context rescaled to 1 replicas
	I0317 12:53:54.985523  488746 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-9ch8s" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-9ch8s" not found
	I0317 12:53:54.985545  488746 pod_ready.go:82] duration metric: took 1.503402859s for pod "coredns-668d6bf9bc-9ch8s" in "kube-system" namespace to be "Ready" ...
	E0317 12:53:54.985557  488746 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-9ch8s" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-9ch8s" not found
	I0317 12:53:54.985578  488746 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace to be "Ready" ...
	I0317 12:53:56.991718  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:53:59.491741  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:01.991533  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:04.491848  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:06.991525  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:09.491165  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:11.991027  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:13.991600  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:16.491573  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:18.991104  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:21.491008  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:23.491738  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:25.991358  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:28.491057  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:30.491753  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:32.991784  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:35.491349  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:37.990488  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:39.991299  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:42.490301  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:44.491522  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:46.991654  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:48.991800  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:51.491483  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:53.991096  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:55.991500  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:54:57.991539  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:00.491750  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:02.991331  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:05.491208  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:07.491378  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:09.991863  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:12.491213  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:14.491932  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:16.991347  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:18.991393  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:20.992084  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:22.992240  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:25.491847  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:27.991819  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:29.993391  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:32.491324  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:34.491562  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:36.991451  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:38.991591  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:40.992165  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:43.491313  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:45.991422  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:47.991578  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:49.993417  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:52.490735  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:54.491473  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:56.491837  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:55:58.491870  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:00.991448  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:03.491005  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:05.991518  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:08.490685  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:10.491969  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:12.492142  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:14.991376  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:16.991570  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:18.991650  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:21.492285  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:23.990695  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:25.991860  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:28.491864  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:30.492082  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:32.992178  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:35.491031  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:37.991549  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:40.492207  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:42.990937  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:45.491033  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:47.491198  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:49.991806  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:52.490657  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:54.491063  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:56.491359  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:56:58.992032  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:01.491658  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:03.991067  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:06.491496  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:08.990928  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:11.491309  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:13.491849  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:15.991422  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:18.490885  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:20.491390  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:22.991365  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:25.490923  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:27.491797  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:29.991526  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:32.492393  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:34.991638  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:37.491635  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:39.493016  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:41.991135  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:44.491020  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:46.992128  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:49.492081  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:51.991785  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:54.492106  488746 pod_ready.go:103] pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace has status "Ready":"False"
	I0317 12:57:54.991720  488746 pod_ready.go:82] duration metric: took 4m0.006128022s for pod "coredns-668d6bf9bc-r9f6m" in "kube-system" namespace to be "Ready" ...
	E0317 12:57:54.991733  488746 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0317 12:57:54.991751  488746 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-207072" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:54.996779  488746 pod_ready.go:93] pod "etcd-functional-207072" in "kube-system" namespace has status "Ready":"True"
	I0317 12:57:54.996790  488746 pod_ready.go:82] duration metric: took 5.034591ms for pod "etcd-functional-207072" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:54.996800  488746 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-207072" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:55.001646  488746 pod_ready.go:93] pod "kube-apiserver-functional-207072" in "kube-system" namespace has status "Ready":"True"
	I0317 12:57:55.001658  488746 pod_ready.go:82] duration metric: took 4.853496ms for pod "kube-apiserver-functional-207072" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:55.001668  488746 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-207072" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:55.006013  488746 pod_ready.go:93] pod "kube-controller-manager-functional-207072" in "kube-system" namespace has status "Ready":"True"
	I0317 12:57:55.006026  488746 pod_ready.go:82] duration metric: took 4.352057ms for pod "kube-controller-manager-functional-207072" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:55.006037  488746 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-z27vj" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:55.010593  488746 pod_ready.go:93] pod "kube-proxy-z27vj" in "kube-system" namespace has status "Ready":"True"
	I0317 12:57:55.010607  488746 pod_ready.go:82] duration metric: took 4.562929ms for pod "kube-proxy-z27vj" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:55.010615  488746 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-207072" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:55.389943  488746 pod_ready.go:93] pod "kube-scheduler-functional-207072" in "kube-system" namespace has status "Ready":"True"
	I0317 12:57:55.389956  488746 pod_ready.go:82] duration metric: took 379.335636ms for pod "kube-scheduler-functional-207072" in "kube-system" namespace to be "Ready" ...
	I0317 12:57:55.389962  488746 pod_ready.go:39] duration metric: took 4m1.914068741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
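
Every pod_ready poll above boils down to fetching the pod and inspecting its Ready condition; the coredns pod stayed Ready=False for the entire 4m window, which is what ultimately fails this phase. An illustrative client-go version of that check — a sketch, not minikube's pod_ready.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
            "coredns-668d6bf9bc-r9f6m", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                fmt.Printf("Ready: %s\n", c.Status) // "False" throughout the wait above
            }
        }
    }
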
	I0317 12:57:55.389999  488746 api_server.go:52] waiting for apiserver process to appear ...
	I0317 12:57:55.390046  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:57:55.390103  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:57:55.429098  488746 cri.go:89] found id: "d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10"
	I0317 12:57:55.429118  488746 cri.go:89] found id: ""
	I0317 12:57:55.429128  488746 logs.go:282] 1 containers: [d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10]
	I0317 12:57:55.429189  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:55.433158  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:57:55.433223  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:57:55.475830  488746 cri.go:89] found id: "7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0"
	I0317 12:57:55.475846  488746 cri.go:89] found id: ""
	I0317 12:57:55.475855  488746 logs.go:282] 1 containers: [7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0]
	I0317 12:57:55.475919  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:55.480820  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:57:55.480956  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:57:55.517080  488746 cri.go:89] found id: ""
	I0317 12:57:55.517107  488746 logs.go:282] 0 containers: []
	W0317 12:57:55.517117  488746 logs.go:284] No container was found matching "coredns"
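
The cri.go lookups run `crictl ps -a --quiet --name=<component>` and treat each non-empty stdout line as a container ID; empty output is reported as 0 containers, as with coredns here (and kindnet below). A sketch of that pattern:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs crictl prints in --quiet mode,
    // one per line; no output means no matching container.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listContainerIDs("coredns")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids) // 0 here, matching the log
    }
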
	I0317 12:57:55.517125  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:57:55.517205  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:57:55.571781  488746 cri.go:89] found id: "106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe"
	I0317 12:57:55.571802  488746 cri.go:89] found id: ""
	I0317 12:57:55.571812  488746 logs.go:282] 1 containers: [106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe]
	I0317 12:57:55.571877  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:55.575936  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:57:55.576019  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:57:55.612542  488746 cri.go:89] found id: "1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf"
	I0317 12:57:55.612559  488746 cri.go:89] found id: ""
	I0317 12:57:55.612569  488746 logs.go:282] 1 containers: [1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf]
	I0317 12:57:55.612633  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:55.616456  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:57:55.616535  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:57:55.652252  488746 cri.go:89] found id: "bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5"
	I0317 12:57:55.652267  488746 cri.go:89] found id: ""
	I0317 12:57:55.652274  488746 logs.go:282] 1 containers: [bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5]
	I0317 12:57:55.652382  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:55.656604  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:57:55.656679  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:57:55.695329  488746 cri.go:89] found id: ""
	I0317 12:57:55.695346  488746 logs.go:282] 0 containers: []
	W0317 12:57:55.695354  488746 logs.go:284] No container was found matching "kindnet"
	I0317 12:57:55.695369  488746 logs.go:123] Gathering logs for kubelet ...
	I0317 12:57:55.695383  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:57:55.792033  488746 logs.go:123] Gathering logs for dmesg ...
	I0317 12:57:55.792075  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:57:55.816880  488746 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:57:55.816904  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:57:55.909954  488746 logs.go:123] Gathering logs for kube-controller-manager [bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5] ...
	I0317 12:57:55.909985  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5"
	I0317 12:57:55.959058  488746 logs.go:123] Gathering logs for kube-apiserver [d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10] ...
	I0317 12:57:55.959084  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10"
	I0317 12:57:56.004620  488746 logs.go:123] Gathering logs for etcd [7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0] ...
	I0317 12:57:56.004654  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0"
	I0317 12:57:56.046475  488746 logs.go:123] Gathering logs for kube-scheduler [106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe] ...
	I0317 12:57:56.046497  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe"
	I0317 12:57:56.093549  488746 logs.go:123] Gathering logs for kube-proxy [1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf] ...
	I0317 12:57:56.093576  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf"
	I0317 12:57:56.131748  488746 logs.go:123] Gathering logs for containerd ...
	I0317 12:57:56.131790  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 12:57:56.177207  488746 logs.go:123] Gathering logs for container status ...
	I0317 12:57:56.177241  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 12:57:58.718896  488746 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 12:57:58.731404  488746 api_server.go:72] duration metric: took 4m6.062576707s to wait for apiserver process to appear ...
	I0317 12:57:58.731422  488746 api_server.go:88] waiting for apiserver healthz status ...
	I0317 12:57:58.731460  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:57:58.731511  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:57:58.768500  488746 cri.go:89] found id: "d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10"
	I0317 12:57:58.768519  488746 cri.go:89] found id: ""
	I0317 12:57:58.768527  488746 logs.go:282] 1 containers: [d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10]
	I0317 12:57:58.768583  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:58.772692  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:57:58.772759  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:57:58.808823  488746 cri.go:89] found id: "7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0"
	I0317 12:57:58.808840  488746 cri.go:89] found id: ""
	I0317 12:57:58.808850  488746 logs.go:282] 1 containers: [7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0]
	I0317 12:57:58.808915  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:58.812725  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:57:58.812791  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:57:58.848257  488746 cri.go:89] found id: ""
	I0317 12:57:58.848274  488746 logs.go:282] 0 containers: []
	W0317 12:57:58.848281  488746 logs.go:284] No container was found matching "coredns"
	I0317 12:57:58.848289  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:57:58.848383  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:57:58.884900  488746 cri.go:89] found id: "106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe"
	I0317 12:57:58.884916  488746 cri.go:89] found id: ""
	I0317 12:57:58.884924  488746 logs.go:282] 1 containers: [106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe]
	I0317 12:57:58.884975  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:58.889145  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:57:58.889206  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:57:58.925119  488746 cri.go:89] found id: "1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf"
	I0317 12:57:58.925133  488746 cri.go:89] found id: ""
	I0317 12:57:58.925140  488746 logs.go:282] 1 containers: [1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf]
	I0317 12:57:58.925189  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:58.929036  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:57:58.929124  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:57:58.966358  488746 cri.go:89] found id: "bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5"
	I0317 12:57:58.966387  488746 cri.go:89] found id: ""
	I0317 12:57:58.966397  488746 logs.go:282] 1 containers: [bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5]
	I0317 12:57:58.966461  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:57:58.970779  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:57:58.970846  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:57:59.009399  488746 cri.go:89] found id: ""
	I0317 12:57:59.009417  488746 logs.go:282] 0 containers: []
	W0317 12:57:59.009425  488746 logs.go:284] No container was found matching "kindnet"
	I0317 12:57:59.009450  488746 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:57:59.009461  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:57:59.102749  488746 logs.go:123] Gathering logs for kube-apiserver [d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10] ...
	I0317 12:57:59.102771  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10"
	I0317 12:57:59.146488  488746 logs.go:123] Gathering logs for kube-scheduler [106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe] ...
	I0317 12:57:59.146514  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe"
	I0317 12:57:59.192442  488746 logs.go:123] Gathering logs for kube-proxy [1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf] ...
	I0317 12:57:59.192475  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf"
	I0317 12:57:59.233627  488746 logs.go:123] Gathering logs for containerd ...
	I0317 12:57:59.233651  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 12:57:59.279218  488746 logs.go:123] Gathering logs for container status ...
	I0317 12:57:59.279246  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 12:57:59.320252  488746 logs.go:123] Gathering logs for kubelet ...
	I0317 12:57:59.320274  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:57:59.410796  488746 logs.go:123] Gathering logs for dmesg ...
	I0317 12:57:59.410832  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:57:59.435699  488746 logs.go:123] Gathering logs for etcd [7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0] ...
	I0317 12:57:59.435726  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0"
	I0317 12:57:59.485243  488746 logs.go:123] Gathering logs for kube-controller-manager [bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5] ...
	I0317 12:57:59.485268  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5"
	I0317 12:58:02.036160  488746 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0317 12:58:02.040290  488746 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0317 12:58:02.041325  488746 api_server.go:141] control plane version: v1.32.2
	I0317 12:58:02.041350  488746 api_server.go:131] duration metric: took 3.309922607s to wait for apiserver health ...
	I0317 12:58:02.041360  488746 system_pods.go:43] waiting for kube-system pods to appear ...
	I0317 12:58:02.041385  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0317 12:58:02.041434  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0317 12:58:02.078216  488746 cri.go:89] found id: "d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10"
	I0317 12:58:02.078231  488746 cri.go:89] found id: ""
	I0317 12:58:02.078239  488746 logs.go:282] 1 containers: [d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10]
	I0317 12:58:02.078292  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:58:02.082649  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0317 12:58:02.082726  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0317 12:58:02.119366  488746 cri.go:89] found id: "7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0"
	I0317 12:58:02.119381  488746 cri.go:89] found id: ""
	I0317 12:58:02.119389  488746 logs.go:282] 1 containers: [7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0]
	I0317 12:58:02.119457  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:58:02.124152  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0317 12:58:02.124228  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0317 12:58:02.161410  488746 cri.go:89] found id: ""
	I0317 12:58:02.161427  488746 logs.go:282] 0 containers: []
	W0317 12:58:02.161434  488746 logs.go:284] No container was found matching "coredns"
	I0317 12:58:02.161441  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0317 12:58:02.161494  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0317 12:58:02.199706  488746 cri.go:89] found id: "106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe"
	I0317 12:58:02.199720  488746 cri.go:89] found id: ""
	I0317 12:58:02.199727  488746 logs.go:282] 1 containers: [106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe]
	I0317 12:58:02.199788  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:58:02.203599  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0317 12:58:02.203671  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0317 12:58:02.240167  488746 cri.go:89] found id: "1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf"
	I0317 12:58:02.240183  488746 cri.go:89] found id: ""
	I0317 12:58:02.240192  488746 logs.go:282] 1 containers: [1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf]
	I0317 12:58:02.240255  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:58:02.244502  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0317 12:58:02.244596  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0317 12:58:02.281939  488746 cri.go:89] found id: "bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5"
	I0317 12:58:02.281960  488746 cri.go:89] found id: ""
	I0317 12:58:02.281972  488746 logs.go:282] 1 containers: [bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5]
	I0317 12:58:02.282046  488746 ssh_runner.go:195] Run: which crictl
	I0317 12:58:02.286237  488746 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0317 12:58:02.286328  488746 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0317 12:58:02.322036  488746 cri.go:89] found id: ""
	I0317 12:58:02.322057  488746 logs.go:282] 0 containers: []
	W0317 12:58:02.322068  488746 logs.go:284] No container was found matching "kindnet"
	I0317 12:58:02.322087  488746 logs.go:123] Gathering logs for dmesg ...
	I0317 12:58:02.322105  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0317 12:58:02.347206  488746 logs.go:123] Gathering logs for kube-apiserver [d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10] ...
	I0317 12:58:02.347238  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10"
	I0317 12:58:02.393481  488746 logs.go:123] Gathering logs for etcd [7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0] ...
	I0317 12:58:02.393510  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0"
	I0317 12:58:02.441042  488746 logs.go:123] Gathering logs for kube-proxy [1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf] ...
	I0317 12:58:02.441072  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf"
	I0317 12:58:02.482854  488746 logs.go:123] Gathering logs for containerd ...
	I0317 12:58:02.482875  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0317 12:58:02.529772  488746 logs.go:123] Gathering logs for kubelet ...
	I0317 12:58:02.529801  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0317 12:58:02.620151  488746 logs.go:123] Gathering logs for describe nodes ...
	I0317 12:58:02.620180  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0317 12:58:02.716714  488746 logs.go:123] Gathering logs for kube-scheduler [106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe] ...
	I0317 12:58:02.716738  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe"
	I0317 12:58:02.763027  488746 logs.go:123] Gathering logs for kube-controller-manager [bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5] ...
	I0317 12:58:02.763055  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5"
	I0317 12:58:02.812128  488746 logs.go:123] Gathering logs for container status ...
	I0317 12:58:02.812186  488746 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0317 12:58:05.360063  488746 system_pods.go:59] 8 kube-system pods found
	I0317 12:58:05.360095  488746 system_pods.go:61] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:05.360102  488746 system_pods.go:61] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:05.360109  488746 system_pods.go:61] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:05.360112  488746 system_pods.go:61] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:05.360115  488746 system_pods.go:61] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:05.360118  488746 system_pods.go:61] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:05.360120  488746 system_pods.go:61] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:05.360122  488746 system_pods.go:61] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:05.360128  488746 system_pods.go:74] duration metric: took 3.318763132s to wait for pod list to return data ...
	I0317 12:58:05.360134  488746 default_sa.go:34] waiting for default service account to be created ...
	I0317 12:58:05.362701  488746 default_sa.go:45] found service account: "default"
	I0317 12:58:05.362717  488746 default_sa.go:55] duration metric: took 2.57834ms for default service account to be created ...
	I0317 12:58:05.362726  488746 system_pods.go:116] waiting for k8s-apps to be running ...
	I0317 12:58:05.365359  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:05.365381  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:05.365385  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:05.365392  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:05.365395  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:05.365398  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:05.365401  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:05.365404  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:05.365406  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:05.365427  488746 retry.go:31] will retry after 221.652727ms: missing components: kube-dns
	I0317 12:58:05.591734  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:05.591757  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:05.591761  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:05.591768  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:05.591770  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:05.591774  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:05.591776  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:05.591779  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:05.591781  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:05.591796  488746 retry.go:31] will retry after 306.733157ms: missing components: kube-dns
	I0317 12:58:05.902956  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:05.902979  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:05.902984  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:05.902993  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:05.902996  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:05.903000  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:05.903002  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:05.903004  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:05.903007  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:05.903021  488746 retry.go:31] will retry after 424.919233ms: missing components: kube-dns
	I0317 12:58:06.332057  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:06.332083  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:06.332089  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:06.332099  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:06.332102  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:06.332105  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:06.332107  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:06.332109  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:06.332114  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:06.332133  488746 retry.go:31] will retry after 586.204375ms: missing components: kube-dns
	I0317 12:58:06.922579  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:06.922600  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:06.922604  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:06.922613  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:06.922616  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:06.922619  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:06.922635  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:06.922637  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:06.922639  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:06.922658  488746 retry.go:31] will retry after 525.823022ms: missing components: kube-dns
	I0317 12:58:07.453339  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:07.453360  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:07.453364  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:07.453371  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:07.453374  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:07.453377  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:07.453380  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:07.453383  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:07.453386  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:07.453403  488746 retry.go:31] will retry after 681.552762ms: missing components: kube-dns
	I0317 12:58:08.139251  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:08.139274  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:08.139278  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:08.139285  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:08.139289  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:08.139293  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:08.139295  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:08.139297  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:08.139299  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:08.139314  488746 retry.go:31] will retry after 1.063132545s: missing components: kube-dns
	I0317 12:58:09.207344  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:09.207368  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:09.207373  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:09.207379  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:09.207382  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:09.207385  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:09.207387  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:09.207389  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:09.207392  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:09.207409  488746 retry.go:31] will retry after 1.257006143s: missing components: kube-dns
	I0317 12:58:10.469386  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:10.469408  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:10.469412  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:10.469420  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:10.469423  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:10.469426  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:10.469428  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:10.469430  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:10.469433  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:10.469448  488746 retry.go:31] will retry after 1.165983016s: missing components: kube-dns
	I0317 12:58:11.640635  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:11.640666  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:11.640674  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:11.640685  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:11.640690  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:11.640696  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:11.640700  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:11.640704  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:11.640708  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:11.640726  488746 retry.go:31] will retry after 1.907071971s: missing components: kube-dns
	I0317 12:58:13.552249  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:13.552270  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:13.552275  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:13.552282  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:13.552285  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:13.552288  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:13.552290  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:13.552293  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:13.552295  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:13.552310  488746 retry.go:31] will retry after 1.778980792s: missing components: kube-dns
	I0317 12:58:15.336479  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:15.336503  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:15.336508  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:15.336517  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:15.336520  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:15.336523  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:15.336526  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:15.336528  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:15.336530  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:15.336545  488746 retry.go:31] will retry after 2.908379906s: missing components: kube-dns
	I0317 12:58:18.249045  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:18.249068  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:18.249073  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:18.249079  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:18.249082  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:18.249085  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:18.249088  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:18.249090  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:18.249092  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:18.249107  488746 retry.go:31] will retry after 3.183193771s: missing components: kube-dns
	I0317 12:58:21.437537  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:21.437894  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:21.437902  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:21.437909  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:21.437913  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:21.437918  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:21.437921  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:21.437923  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:21.437925  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:21.437942  488746 retry.go:31] will retry after 3.908611062s: missing components: kube-dns
	I0317 12:58:25.352150  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:25.352171  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:25.352175  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:25.352182  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:25.352185  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:25.352188  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:25.352191  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:25.352193  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:25.352195  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:25.352210  488746 retry.go:31] will retry after 7.025607825s: missing components: kube-dns
	I0317 12:58:32.384110  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:32.384131  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:32.384136  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:32.384142  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:32.384145  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:32.384148  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:32.384151  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:32.384153  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:32.384155  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:32.384169  488746 retry.go:31] will retry after 6.257442745s: missing components: kube-dns
	I0317 12:58:38.645753  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:38.645774  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:38.645779  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:38.645788  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:38.645791  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:38.645794  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:38.645798  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:38.645800  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:38.645802  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:38.645821  488746 retry.go:31] will retry after 10.018860069s: missing components: kube-dns
	I0317 12:58:48.668999  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:58:48.669024  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:58:48.669040  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:58:48.669046  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:58:48.669050  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:58:48.669056  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:58:48.669058  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:58:48.669060  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:58:48.669062  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:58:48.669081  488746 retry.go:31] will retry after 12.16685685s: missing components: kube-dns
	I0317 12:59:00.841358  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:59:00.841380  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:59:00.841385  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:59:00.841392  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:59:00.841395  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:59:00.841398  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:59:00.841400  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:59:00.841402  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:59:00.841404  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:59:00.841419  488746 retry.go:31] will retry after 17.291453302s: missing components: kube-dns
	I0317 12:59:18.137888  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:59:18.137911  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:59:18.137915  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:59:18.137922  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:59:18.137925  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:59:18.137928  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:59:18.137930  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:59:18.137932  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:59:18.137934  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:59:18.137949  488746 retry.go:31] will retry after 14.896373823s: missing components: kube-dns
	I0317 12:59:33.039561  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:59:33.039584  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:59:33.039588  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:59:33.039598  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:59:33.039620  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:59:33.039625  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:59:33.039628  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:59:33.039634  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:59:33.039636  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:59:33.039652  488746 retry.go:31] will retry after 23.196447507s: missing components: kube-dns
	I0317 12:59:56.242632  488746 system_pods.go:86] 8 kube-system pods found
	I0317 12:59:56.242655  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 12:59:56.242662  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 12:59:56.242667  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 12:59:56.242670  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 12:59:56.242674  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 12:59:56.242676  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 12:59:56.242678  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 12:59:56.242680  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 12:59:56.242696  488746 retry.go:31] will retry after 31.614250131s: missing components: kube-dns
	I0317 13:00:27.862427  488746 system_pods.go:86] 8 kube-system pods found
	I0317 13:00:27.862451  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:00:27.862457  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 13:00:27.862465  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 13:00:27.862468  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 13:00:27.862472  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 13:00:27.862475  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 13:00:27.862477  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 13:00:27.862479  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 13:00:27.862493  488746 retry.go:31] will retry after 27.681425355s: missing components: kube-dns
	I0317 13:00:55.550042  488746 system_pods.go:86] 8 kube-system pods found
	I0317 13:00:55.550071  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:00:55.550077  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 13:00:55.550085  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 13:00:55.550088  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 13:00:55.550091  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 13:00:55.550093  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 13:00:55.550095  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 13:00:55.550098  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 13:00:55.550116  488746 retry.go:31] will retry after 34.684452897s: missing components: kube-dns
	I0317 13:01:30.240762  488746 system_pods.go:86] 8 kube-system pods found
	I0317 13:01:30.240790  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:01:30.240796  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 13:01:30.240802  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 13:01:30.240805  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 13:01:30.240809  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 13:01:30.240811  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 13:01:30.240813  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 13:01:30.240816  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 13:01:30.240832  488746 retry.go:31] will retry after 59.085152578s: missing components: kube-dns
	I0317 13:02:29.330720  488746 system_pods.go:86] 8 kube-system pods found
	I0317 13:02:29.330746  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:02:29.330753  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 13:02:29.330759  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 13:02:29.330762  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 13:02:29.330767  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 13:02:29.330769  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 13:02:29.330771  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 13:02:29.330773  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 13:02:29.330789  488746 retry.go:31] will retry after 1m7.507960078s: missing components: kube-dns
	I0317 13:03:36.846675  488746 system_pods.go:86] 8 kube-system pods found
	I0317 13:03:36.846702  488746 system_pods.go:89] "coredns-668d6bf9bc-r9f6m" [1cc533f1-1070-466a-a841-26719215cdf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0317 13:03:36.846710  488746 system_pods.go:89] "etcd-functional-207072" [058b4dd7-5a38-4a0f-bd0f-0173db6848f4] Running
	I0317 13:03:36.846717  488746 system_pods.go:89] "kindnet-2cglc" [57fc66b5-f0c5-4b2c-be5e-84dae74d095a] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0317 13:03:36.846719  488746 system_pods.go:89] "kube-apiserver-functional-207072" [a7ad403e-da46-491e-9a47-3beba74e9b40] Running
	I0317 13:03:36.846724  488746 system_pods.go:89] "kube-controller-manager-functional-207072" [af68bdb2-c8ca-441b-83b4-168c0a861250] Running
	I0317 13:03:36.846727  488746 system_pods.go:89] "kube-proxy-z27vj" [4a8c783c-4127-42b1-a992-b717b93a5f6a] Running
	I0317 13:03:36.846729  488746 system_pods.go:89] "kube-scheduler-functional-207072" [455c0735-3a97-4888-8e69-c598cbe8f152] Running
	I0317 13:03:36.846731  488746 system_pods.go:89] "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
	I0317 13:03:36.848776  488746 out.go:201] 
	W0317 13:03:36.850414  488746 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0317 13:03:36.850446  488746 out.go:270] * 
	W0317 13:03:36.851347  488746 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0317 13:03:36.852757  488746 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8207e7b4b51f8       6e38f40d628db       9 minutes ago       Running             storage-provisioner       0                   6f350da60d737       storage-provisioner
	1c1c6a0e1743a       f1332858868e1       9 minutes ago       Running             kube-proxy                0                   884c7b90bd2d9       kube-proxy-z27vj
	d8a53c3a1379d       85b7a174738ba       9 minutes ago       Running             kube-apiserver            0                   5a643bbfa28c1       kube-apiserver-functional-207072
	7236cccde01fc       a9e7e6b294baf       9 minutes ago       Running             etcd                      0                   40dfb3520b97b       etcd-functional-207072
	106a2d079f142       d8e673e7c9983       9 minutes ago       Running             kube-scheduler            0                   3de353163594b       kube-scheduler-functional-207072
	bfc0e14975938       b6a454c5a800d       9 minutes ago       Running             kube-controller-manager   0                   bc597351928b9       kube-controller-manager-functional-207072
	
	
	==> containerd <==
	Mar 17 13:01:00 functional-207072 containerd[883]: time="2025-03-17T13:01:00.473662473Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d0abdb78e18ac6dc70d8f1f53c62a0f87f634ab2ecac5431e2b4584ab6af59a\": failed to find network info for sandbox \"0d0abdb78e18ac6dc70d8f1f53c62a0f87f634ab2ecac5431e2b4584ab6af59a\""
	Mar 17 13:01:13 functional-207072 containerd[883]: time="2025-03-17T13:01:13.451664269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:01:13 functional-207072 containerd[883]: time="2025-03-17T13:01:13.476405577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e83385b03c3cc537f7d0ad744bc6ae1ac771d138f6671079b7cc08c75bfa25cf\": failed to find network info for sandbox \"e83385b03c3cc537f7d0ad744bc6ae1ac771d138f6671079b7cc08c75bfa25cf\""
	Mar 17 13:01:27 functional-207072 containerd[883]: time="2025-03-17T13:01:27.451447475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:01:27 functional-207072 containerd[883]: time="2025-03-17T13:01:27.474423269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6989bc137dbe244ad461b7239a37db09ef199430d9d62aece15a2b7d80f352c9\": failed to find network info for sandbox \"6989bc137dbe244ad461b7239a37db09ef199430d9d62aece15a2b7d80f352c9\""
	Mar 17 13:01:41 functional-207072 containerd[883]: time="2025-03-17T13:01:41.450884220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:01:41 functional-207072 containerd[883]: time="2025-03-17T13:01:41.473504080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cf993b9dc0c3d2ae8c93ff7d5ebff8f875c6fe79e4396af004af582376e8007\": failed to find network info for sandbox \"1cf993b9dc0c3d2ae8c93ff7d5ebff8f875c6fe79e4396af004af582376e8007\""
	Mar 17 13:01:56 functional-207072 containerd[883]: time="2025-03-17T13:01:56.451523238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:01:56 functional-207072 containerd[883]: time="2025-03-17T13:01:56.474069841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73469bfdeb0a0ad23110125dedfa0197d065af65e3a5e15d5c1f452f177b8bbe\": failed to find network info for sandbox \"73469bfdeb0a0ad23110125dedfa0197d065af65e3a5e15d5c1f452f177b8bbe\""
	Mar 17 13:02:09 functional-207072 containerd[883]: time="2025-03-17T13:02:09.451717521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:02:09 functional-207072 containerd[883]: time="2025-03-17T13:02:09.474470441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9c8187d563ddaed545033512818befcaa20b80f03e3e4c8a3c1435cdb94573e\": failed to find network info for sandbox \"d9c8187d563ddaed545033512818befcaa20b80f03e3e4c8a3c1435cdb94573e\""
	Mar 17 13:02:21 functional-207072 containerd[883]: time="2025-03-17T13:02:21.451358258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:02:21 functional-207072 containerd[883]: time="2025-03-17T13:02:21.473843991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f13ec9186f25bc77b76712f0055fcff6b23cc565d655ea7ceedae6c9a775282d\": failed to find network info for sandbox \"f13ec9186f25bc77b76712f0055fcff6b23cc565d655ea7ceedae6c9a775282d\""
	Mar 17 13:02:33 functional-207072 containerd[883]: time="2025-03-17T13:02:33.453932094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:02:33 functional-207072 containerd[883]: time="2025-03-17T13:02:33.478161645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"523413f8215086d62e00a4fd180c3fce185c040069986a105a133a373ee0f4f4\": failed to find network info for sandbox \"523413f8215086d62e00a4fd180c3fce185c040069986a105a133a373ee0f4f4\""
	Mar 17 13:02:44 functional-207072 containerd[883]: time="2025-03-17T13:02:44.450926299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:02:44 functional-207072 containerd[883]: time="2025-03-17T13:02:44.471839135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\": failed to find network info for sandbox \"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\""
	Mar 17 13:02:58 functional-207072 containerd[883]: time="2025-03-17T13:02:58.450810496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:02:58 functional-207072 containerd[883]: time="2025-03-17T13:02:58.474602984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\": failed to find network info for sandbox \"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\""
	Mar 17 13:03:12 functional-207072 containerd[883]: time="2025-03-17T13:03:12.451782013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:03:12 functional-207072 containerd[883]: time="2025-03-17T13:03:12.476857103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\": failed to find network info for sandbox \"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\""
	Mar 17 13:03:23 functional-207072 containerd[883]: time="2025-03-17T13:03:23.451247735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:03:23 functional-207072 containerd[883]: time="2025-03-17T13:03:23.474294778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\": failed to find network info for sandbox \"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\""
	Mar 17 13:03:36 functional-207072 containerd[883]: time="2025-03-17T13:03:36.451495620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,}"
	Mar 17 13:03:36 functional-207072 containerd[883]: time="2025-03-17T13:03:36.476808477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r9f6m,Uid:1cc533f1-1070-466a-a841-26719215cdf3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\": failed to find network info for sandbox \"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\""
	
	
	==> describe nodes <==
	Name:               functional-207072
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-207072
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=functional-207072
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T12_53_48_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 12:53:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-207072
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 13:03:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 13:02:46 +0000   Mon, 17 Mar 2025 12:53:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 13:02:46 +0000   Mon, 17 Mar 2025 12:53:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 13:02:46 +0000   Mon, 17 Mar 2025 12:53:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 13:02:46 +0000   Mon, 17 Mar 2025 12:53:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-207072
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb8d3f267a4e49c4ae52015c5f6076dc
	  System UUID:                9a307ec7-4bd9-49c0-af54-ef74d833d8b0
	  Boot ID:                    40219139-515e-4d1c-86e4-bab1900bd49a
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-r9f6m                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m45s
	  kube-system                 etcd-functional-207072                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m50s
	  kube-system                 kindnet-2cglc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m45s
	  kube-system                 kube-apiserver-functional-207072             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-controller-manager-functional-207072    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-proxy-z27vj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 kube-scheduler-functional-207072             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m44s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 9m56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m56s (x8 over 9m56s)  kubelet          Node functional-207072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m56s (x8 over 9m56s)  kubelet          Node functional-207072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m56s (x7 over 9m56s)  kubelet          Node functional-207072 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal   Starting                 9m50s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m50s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  9m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m50s                  kubelet          Node functional-207072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m50s                  kubelet          Node functional-207072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m50s                  kubelet          Node functional-207072 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m46s                  node-controller  Node functional-207072 event: Registered Node functional-207072 in Controller
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +2.171804] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000008] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000005] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000004] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.047810] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000009] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000001] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000011] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000008] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[Mar17 12:32] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000007] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.043860] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000003] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	
	
	==> etcd [7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0] <==
	{"level":"info","ts":"2025-03-17T12:53:42.577776Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T12:53:42.577915Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-03-17T12:53:42.578164Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-03-17T12:53:42.578534Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-03-17T12:53:42.578691Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-17T12:53:43.265980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-03-17T12:53:43.266039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-03-17T12:53:43.266056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-03-17T12:53:43.266105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-03-17T12:53:43.266124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-03-17T12:53:43.266136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-03-17T12:53:43.266150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-03-17T12:53:43.267220Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:53:43.267944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T12:53:43.267943Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-207072 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T12:53:43.267981Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T12:53:43.268235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:53:43.268372Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:53:43.268405Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:53:43.268919Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T12:53:43.268919Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T12:53:43.269693Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-03-17T12:53:43.269997Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T12:53:43.270062Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T12:53:43.270092Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 13:03:38 up  2:45,  0 users,  load average: 0.39, 0.30, 0.82
	Linux functional-207072 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [d8a53c3a1379dd78336879c636c679a81407512cab39e101597e5ea5b8cdbb10] <==
	I0317 12:53:45.044766       1 aggregator.go:171] initial CRD sync complete...
	I0317 12:53:45.044774       1 autoregister_controller.go:144] Starting autoregister controller
	I0317 12:53:45.044869       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0317 12:53:45.044882       1 cache.go:39] Caches are synced for autoregister controller
	I0317 12:53:45.044900       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0317 12:53:45.046526       1 controller.go:615] quota admission added evaluator for: namespaces
	I0317 12:53:45.047178       1 shared_informer.go:320] Caches are synced for configmaps
	I0317 12:53:45.047642       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0317 12:53:45.047670       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0317 12:53:45.214300       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 12:53:45.850948       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0317 12:53:45.855361       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0317 12:53:45.855387       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0317 12:53:46.457868       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 12:53:46.517808       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 12:53:46.661335       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0317 12:53:46.669813       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0317 12:53:46.671171       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 12:53:46.676830       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 12:53:46.907326       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 12:53:47.562539       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 12:53:47.575244       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0317 12:53:47.586357       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 12:53:52.408891       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0317 12:53:52.458464       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5] <==
	I0317 12:53:51.461456       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0317 12:53:51.461506       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0317 12:53:51.461550       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0317 12:53:51.461601       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0317 12:53:51.461611       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0317 12:53:51.466803       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 12:53:51.468208       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-207072" podCIDRs=["10.244.0.0/24"]
	I0317 12:53:51.468247       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 12:53:51.468303       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 12:53:51.469421       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 12:53:52.212549       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 12:53:52.577381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.43433ms"
	I0317 12:53:52.583706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="6.253096ms"
	I0317 12:53:52.583862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="105.308µs"
	I0317 12:53:52.583941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="37.478µs"
	I0317 12:53:52.591939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.032µs"
	I0317 12:53:53.570746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.272655ms"
	I0317 12:53:53.579999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="9.213074ms"
	I0317 12:53:53.580079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="48.953µs"
	I0317 12:53:54.574124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="82.985µs"
	I0317 12:53:54.579849       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.071µs"
	I0317 12:53:54.584585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="93.591µs"
	I0317 12:53:57.956293       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 12:57:42.320529       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 13:02:46.997644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	
	
	==> kube-proxy [1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf] <==
	I0317 12:53:53.549362       1 server_linux.go:66] "Using iptables proxy"
	I0317 12:53:53.737121       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0317 12:53:53.737193       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 12:53:53.766142       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 12:53:53.766218       1 server_linux.go:170] "Using iptables Proxier"
	I0317 12:53:53.768915       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 12:53:53.769478       1 server.go:497] "Version info" version="v1.32.2"
	I0317 12:53:53.769518       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 12:53:53.771347       1 config.go:199] "Starting service config controller"
	I0317 12:53:53.771402       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 12:53:53.771357       1 config.go:105] "Starting endpoint slice config controller"
	I0317 12:53:53.771524       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 12:53:53.771861       1 config.go:329] "Starting node config controller"
	I0317 12:53:53.771895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 12:53:53.871636       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 12:53:53.871640       1 shared_informer.go:320] Caches are synced for service config
	I0317 12:53:53.872251       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe] <==
	W0317 12:53:45.044691       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:45.044767       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:45.044816       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:45.044866       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:45.821699       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:45.821748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:45.910948       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 12:53:45.910999       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:45.959717       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 12:53:45.959764       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:45.977081       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:45.977131       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.022739       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:46.022784       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.043409       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 12:53:46.043454       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.062136       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:46.062196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.081352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 12:53:46.081427       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.102531       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 12:53:46.102595       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.400396       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 12:53:46.400456       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0317 12:53:49.074698       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 13:02:33 functional-207072 kubelet[1656]: E0317 13:02:33.478621    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"523413f8215086d62e00a4fd180c3fce185c040069986a105a133a373ee0f4f4\\\": failed to find network info for sandbox \\\"523413f8215086d62e00a4fd180c3fce185c040069986a105a133a373ee0f4f4\\\"\"" pod="kube-system/coredns-668d6bf9bc-r9f6m" podUID="1cc533f1-1070-466a-a841-26719215cdf3"
	Mar 17 13:02:43 functional-207072 kubelet[1656]: E0317 13:02:43.451564    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-2cglc" podUID="57fc66b5-f0c5-4b2c-be5e-84dae74d095a"
	Mar 17 13:02:44 functional-207072 kubelet[1656]: E0317 13:02:44.472128    1656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\": failed to find network info for sandbox \"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\""
	Mar 17 13:02:44 functional-207072 kubelet[1656]: E0317 13:02:44.472230    1656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\": failed to find network info for sandbox \"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:02:44 functional-207072 kubelet[1656]: E0317 13:02:44.472276    1656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\": failed to find network info for sandbox \"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:02:44 functional-207072 kubelet[1656]: E0317 13:02:44.472360    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\\\": failed to find network info for sandbox \\\"6f90a74342b223fbe9bd49cdca9140faddc22974abf8c025f67feb9ce289163b\\\"\"" pod="kube-system/coredns-668d6bf9bc-r9f6m" podUID="1cc533f1-1070-466a-a841-26719215cdf3"
	Mar 17 13:02:58 functional-207072 kubelet[1656]: E0317 13:02:58.451169    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-2cglc" podUID="57fc66b5-f0c5-4b2c-be5e-84dae74d095a"
	Mar 17 13:02:58 functional-207072 kubelet[1656]: E0317 13:02:58.474885    1656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\": failed to find network info for sandbox \"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\""
	Mar 17 13:02:58 functional-207072 kubelet[1656]: E0317 13:02:58.474975    1656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\": failed to find network info for sandbox \"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:02:58 functional-207072 kubelet[1656]: E0317 13:02:58.475004    1656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\": failed to find network info for sandbox \"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:02:58 functional-207072 kubelet[1656]: E0317 13:02:58.475069    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\\\": failed to find network info for sandbox \\\"c37d73cfc8d68acf40b7b1eeca315af03e7b75d51a99faa9e3a822e0d8f14866\\\"\"" pod="kube-system/coredns-668d6bf9bc-r9f6m" podUID="1cc533f1-1070-466a-a841-26719215cdf3"
	Mar 17 13:03:11 functional-207072 kubelet[1656]: E0317 13:03:11.451461    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-2cglc" podUID="57fc66b5-f0c5-4b2c-be5e-84dae74d095a"
	Mar 17 13:03:12 functional-207072 kubelet[1656]: E0317 13:03:12.477192    1656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\": failed to find network info for sandbox \"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\""
	Mar 17 13:03:12 functional-207072 kubelet[1656]: E0317 13:03:12.477277    1656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\": failed to find network info for sandbox \"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:03:12 functional-207072 kubelet[1656]: E0317 13:03:12.477309    1656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\": failed to find network info for sandbox \"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:03:12 functional-207072 kubelet[1656]: E0317 13:03:12.477365    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\\\": failed to find network info for sandbox \\\"73f2e855aebdedc2d14284510240808de2eac47340a0a3ed09f06845ed858cfa\\\"\"" pod="kube-system/coredns-668d6bf9bc-r9f6m" podUID="1cc533f1-1070-466a-a841-26719215cdf3"
	Mar 17 13:03:23 functional-207072 kubelet[1656]: E0317 13:03:23.474621    1656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\": failed to find network info for sandbox \"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\""
	Mar 17 13:03:23 functional-207072 kubelet[1656]: E0317 13:03:23.474706    1656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\": failed to find network info for sandbox \"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:03:23 functional-207072 kubelet[1656]: E0317 13:03:23.474731    1656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\": failed to find network info for sandbox \"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:03:23 functional-207072 kubelet[1656]: E0317 13:03:23.474780    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\\\": failed to find network info for sandbox \\\"ad4a9446d92b3532284fe6aecbb1f97e3c000e0d08f3c108fdcfdcbefdaf36db\\\"\"" pod="kube-system/coredns-668d6bf9bc-r9f6m" podUID="1cc533f1-1070-466a-a841-26719215cdf3"
	Mar 17 13:03:26 functional-207072 kubelet[1656]: E0317 13:03:26.451978    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kindest/kindnetd:v20250214-acbabc1a\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kindest/kindnetd/manifests/sha256:f3108bcefe4c9797081f9b4405e510eaec07ff17b8224077b3bad839452ebc97: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kube-system/kindnet-2cglc" podUID="57fc66b5-f0c5-4b2c-be5e-84dae74d095a"
	Mar 17 13:03:36 functional-207072 kubelet[1656]: E0317 13:03:36.477088    1656 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\": failed to find network info for sandbox \"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\""
	Mar 17 13:03:36 functional-207072 kubelet[1656]: E0317 13:03:36.477174    1656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\": failed to find network info for sandbox \"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:03:36 functional-207072 kubelet[1656]: E0317 13:03:36.477204    1656 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\": failed to find network info for sandbox \"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\"" pod="kube-system/coredns-668d6bf9bc-r9f6m"
	Mar 17 13:03:36 functional-207072 kubelet[1656]: E0317 13:03:36.477277    1656 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r9f6m_kube-system(1cc533f1-1070-466a-a841-26719215cdf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\\\": failed to find network info for sandbox \\\"ede6a87690478da501c0f32bd6d3715b0d9a9cc316e60acdbf6db73b79324f4a\\\"\"" pod="kube-system/coredns-668d6bf9bc-r9f6m" podUID="1cc533f1-1070-466a-a841-26719215cdf3"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-207072 -n functional-207072
helpers_test.go:261: (dbg) Run:  kubectl --context functional-207072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-r9f6m kindnet-2cglc
helpers_test.go:274: ======> post-mortem[TestFunctional/serial/StartWithProxy]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-207072 describe pod coredns-668d6bf9bc-r9f6m kindnet-2cglc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-207072 describe pod coredns-668d6bf9bc-r9f6m kindnet-2cglc: exit status 1 (73.001875ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-r9f6m" not found
	Error from server (NotFound): pods "kindnet-2cglc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-207072 describe pod coredns-668d6bf9bc-r9f6m kindnet-2cglc: exit status 1
--- FAIL: TestFunctional/serial/StartWithProxy (610.85s)
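Taken together, the kubelet excerpt above points at one root cause: the docker.io/kindest/kindnetd pull is throttled by Docker Hub's unauthenticated rate limit (429 Too Many Requests), which is consistent with the CNI config never being installed and every CoreDNS sandbox then failing with "failed to find network info". A minimal mitigation sketch in shell; mirror.gcr.io is an assumption, and whether --registry-mirror reaches the containerd runtime is not verified by this report:

	# Route pulls through a mirror instead of registry-1.docker.io (hypothetical mirror URL):
	minikube start -p functional-207072 --registry-mirror=https://mirror.gcr.io
	# Or side-load the throttled image from the CI host, assuming it is already cached there:
	minikube -p functional-207072 image load docker.io/kindest/kindnetd:v20250214-acbabc1a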

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (189.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [378d0023-c3fe-4fd0-bd09-c0a8d2247885] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003964448s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-207072 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-207072 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-207072 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-207072 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8cde6b70-9dd4-4bce-94b9-ee53ce708d8a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-207072 -n functional-207072
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-03-17 13:09:18.144113847 +0000 UTC m=+1862.395963464
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-207072 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-207072 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-207072/192.168.49.2
Start Time:       Mon, 17 Mar 2025 13:06:17 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm7f4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-xm7f4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/sp-pod to functional-207072
Warning  Failed     2m8s (x3 over 2m58s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    78s (x4 over 3m)      kubelet            Pulling image "docker.io/nginx"
Warning  Failed     75s (x4 over 2m58s)   kubelet            Error: ErrImagePull
Warning  Failed     75s                   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    13s (x10 over 2m57s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     13s (x10 over 2m57s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-207072 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-207072 logs sp-pod -n default: exit status 1 (72.592902ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-207072 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
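As in the earlier failures, sp-pod never leaves Pending because pulling docker.io/nginx hits the same 429 rate limit; the PVC mechanics themselves look fine (the claim bound and mounted at /tmp/mount). One way to decouple such tests from Docker Hub is to preload the image into the cluster before applying the pod manifest; a sketch, assuming the image already exists in the host's Docker daemon (image load is the same subcommand the Audit table below shows the suite using):

	# Copy nginx from the host daemon into the minikube node's containerd,
	# so the pod start never contacts registry-1.docker.io
	# (the host-side pull may itself be throttled; a pre-cached image avoids even that):
	docker pull docker.io/nginx
	minikube -p functional-207072 image load docker.io/nginx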
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-207072
helpers_test.go:235: (dbg) docker inspect functional-207072:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd",
	        "Created": "2025-03-17T12:53:33.435306722Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 489261,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-03-17T12:53:33.472272287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b0734d4b8a5a2dbe50c35bd8745d33dc9ec48b1b1af7ad72f6736a52b01c8ce5",
	        "ResolvConfPath": "/var/lib/docker/containers/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd/hostname",
	        "HostsPath": "/var/lib/docker/containers/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd/hosts",
	        "LogPath": "/var/lib/docker/containers/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd/99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd-json.log",
	        "Name": "/functional-207072",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-207072:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-207072",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99a40b3312292d4892b26f754c194969f51681949fc8436ef02fe22fc8b70ecd",
	                "LowerDir": "/var/lib/docker/overlay2/62115435c54ec0390f68168f517c032728d52e081274117a610b81dd3e83fb38-init/diff:/var/lib/docker/overlay2/0d1b72eeaeef000e911d7896b151fb0d0a984c18eeb180d19223ea8ba67fdac6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62115435c54ec0390f68168f517c032728d52e081274117a610b81dd3e83fb38/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62115435c54ec0390f68168f517c032728d52e081274117a610b81dd3e83fb38/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62115435c54ec0390f68168f517c032728d52e081274117a610b81dd3e83fb38/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-207072",
	                "Source": "/var/lib/docker/volumes/functional-207072/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-207072",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-207072",
	                "name.minikube.sigs.k8s.io": "functional-207072",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3bb7b68ac4db848252444f20903ebb70f2bef8eac96fb998aca641befa7612a8",
	            "SandboxKey": "/var/run/docker/netns/3bb7b68ac4db",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-207072": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:8e:4c:06:9e:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "56d5d739092975ce17103d292d843574d96362dda269224b5acf5c20e29ff743",
	                    "EndpointID": "d8b88d8d0483f3ebc26d00515fb43351ca250c91b1d79d3fa56d6b89016e4a3b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-207072",
	                        "99a40b331229"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
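The inspect output itself is unremarkable: the node container is Running, /var is backed by the functional-207072 volume, and the API server port 8441/tcp is published on 127.0.0.1:33163. For reference, a single field such as that host port can be extracted with docker inspect's Go-template support rather than scanning the full JSON:

	# Prints 33163 for the container inspected above:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-207072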
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-207072 -n functional-207072
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-207072 logs -n 25: (1.551697163s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                       Args                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-207072 ssh findmnt                                                    | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | -T /mount1                                                                       |                   |         |         |                     |                     |
	| ssh            | functional-207072 ssh findmnt                                                    | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | -T /mount2                                                                       |                   |         |         |                     |                     |
	| ssh            | functional-207072 ssh findmnt                                                    | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | -T /mount3                                                                       |                   |         |         |                     |                     |
	| mount          | -p functional-207072                                                             | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC |                     |
	|                | --kill=true                                                                      |                   |         |         |                     |                     |
	| ssh            | functional-207072 ssh sudo                                                       | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC |                     |
	|                | systemctl is-active docker                                                       |                   |         |         |                     |                     |
	| ssh            | functional-207072 ssh sudo                                                       | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC |                     |
	|                | systemctl is-active crio                                                         |                   |         |         |                     |                     |
	| image          | functional-207072 image load --daemon                                            | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | kicbase/echo-server:functional-207072                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-207072 image ls                                                       | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	| image          | functional-207072 image load --daemon                                            | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | kicbase/echo-server:functional-207072                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-207072 image ls                                                       | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	| image          | functional-207072 image save kicbase/echo-server:functional-207072               | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-207072 image rm                                                       | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | kicbase/echo-server:functional-207072                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-207072 image ls                                                       | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	| ssh            | functional-207072 ssh sudo cat                                                   | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | /etc/test/nested/copy/453732/hosts                                               |                   |         |         |                     |                     |
	| image          | functional-207072 image load                                                     | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| update-context | functional-207072                                                                | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	| update-context | functional-207072                                                                | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	| update-context | functional-207072                                                                | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	| image          | functional-207072                                                                | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | image ls --format short                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-207072                                                                | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | image ls --format yaml                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh            | functional-207072 ssh pgrep                                                      | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC |                     |
	|                | buildkitd                                                                        |                   |         |         |                     |                     |
	| image          | functional-207072 image build -t                                                 | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | localhost/my-image:functional-207072                                             |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                 |                   |         |         |                     |                     |
	| image          | functional-207072 image ls                                                       | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	| image          | functional-207072                                                                | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | image ls --format json                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-207072                                                                | functional-207072 | jenkins | v1.35.0 | 17 Mar 25 13:06 UTC | 17 Mar 25 13:06 UTC |
	|                | image ls --format table                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 13:06:26
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 13:06:26.462779  507060 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:06:26.463070  507060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:06:26.463080  507060 out.go:358] Setting ErrFile to fd 2...
	I0317 13:06:26.463085  507060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:06:26.463316  507060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:06:26.463930  507060 out.go:352] Setting JSON to false
	I0317 13:06:26.465195  507060 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10127,"bootTime":1742206660,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:06:26.465282  507060 start.go:139] virtualization: kvm guest
	I0317 13:06:26.467425  507060 out.go:177] * [functional-207072] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:06:26.469198  507060 notify.go:220] Checking for updates...
	I0317 13:06:26.469229  507060 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:06:26.471091  507060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:06:26.472724  507060 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 13:06:26.474247  507060 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 13:06:26.475645  507060 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:06:26.477128  507060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:06:26.479171  507060 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:06:26.479867  507060 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:06:26.503663  507060 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:06:26.503806  507060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:06:26.558175  507060 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 13:06:26.546703543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 13:06:26.558466  507060 docker.go:318] overlay module found
	I0317 13:06:26.561017  507060 out.go:177] * Using the docker driver based on existing profile
	I0317 13:06:26.562563  507060 start.go:297] selected driver: docker
	I0317 13:06:26.562588  507060 start.go:901] validating driver "docker" against &{Name:functional-207072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:06:26.562728  507060 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:06:26.562858  507060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:06:26.620264  507060 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 13:06:26.608755617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 13:06:26.620996  507060 cni.go:84] Creating CNI manager for ""
	I0317 13:06:26.621069  507060 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 13:06:26.621113  507060 start.go:340] cluster config:
	{Name:functional-207072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:06:26.623339  507060 out.go:177] * dry-run validation complete!
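	The "recommending kindnet" line above shows where the throttled kindnetd image enters the picture: with the docker driver and the containerd runtime, minikube installs kindnet as the CNI. A hedged way to confirm which image the CNI DaemonSet references, assuming it is named "kindnet" as the kindnet-2cglc pod name suggests:
	
	  kubectl --context functional-207072 -n kube-system get ds kindnet -o jsonpath='{.spec.template.spec.containers[0].image}'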
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	481ba5310a210       115053965e86b       2 minutes ago       Running             dashboard-metrics-scraper   0                   7789f5f9bb89c       dashboard-metrics-scraper-5d59dccf9b-ffgg9
	45e66f4ad01a8       07655ddf2eebe       2 minutes ago       Running             kubernetes-dashboard        0                   9ebdd4ae0c661       kubernetes-dashboard-7779f9b69b-nltmq
	bb6baa3d83c4e       56cc512116c8f       2 minutes ago       Exited              mount-munger                0                   da3072fede396       busybox-mount
	dcc8f903e4819       82e4c8a736a4f       3 minutes ago       Running             echoserver                  0                   273bb5a4f5e50       hello-node-fcfd88b6f-484zz
	67967761a3a34       82e4c8a736a4f       3 minutes ago       Running             echoserver                  0                   7251d90115cc8       hello-node-connect-58f9cf68d8-l2sm7
	50a043fc60ea9       6e38f40d628db       3 minutes ago       Running             storage-provisioner         2                   6f350da60d737       storage-provisioner
	b93e0f07793a1       85b7a174738ba       3 minutes ago       Running             kube-apiserver              0                   ae6e547714661       kube-apiserver-functional-207072
	10bd85a0de0da       b6a454c5a800d       3 minutes ago       Running             kube-controller-manager     1                   bc597351928b9       kube-controller-manager-functional-207072
	3bf3a2bd985b0       d8e673e7c9983       3 minutes ago       Running             kube-scheduler              1                   3de353163594b       kube-scheduler-functional-207072
	e1bc3f2431027       a9e7e6b294baf       3 minutes ago       Running             etcd                        1                   40dfb3520b97b       etcd-functional-207072
	8546ad5adad4a       c69fa2e9cbf5f       3 minutes ago       Running             coredns                     1                   fc264c1b14c23       coredns-668d6bf9bc-r9f6m
	4a184fe68ff16       f1332858868e1       3 minutes ago       Running             kube-proxy                  1                   884c7b90bd2d9       kube-proxy-z27vj
	7a8d4c90bba03       6e38f40d628db       3 minutes ago       Exited              storage-provisioner         1                   6f350da60d737       storage-provisioner
	b61ce129c1321       df3849d954c98       3 minutes ago       Running             kindnet-cni                 1                   a4de5fd10b417       kindnet-2cglc
	f1aa6e566f887       c69fa2e9cbf5f       4 minutes ago       Exited              coredns                     0                   fc264c1b14c23       coredns-668d6bf9bc-r9f6m
	1cfb186c112bd       df3849d954c98       4 minutes ago       Exited              kindnet-cni                 0                   a4de5fd10b417       kindnet-2cglc
	1c1c6a0e1743a       f1332858868e1       15 minutes ago      Exited              kube-proxy                  0                   884c7b90bd2d9       kube-proxy-z27vj
	7236cccde01fc       a9e7e6b294baf       15 minutes ago      Exited              etcd                        0                   40dfb3520b97b       etcd-functional-207072
	106a2d079f142       d8e673e7c9983       15 minutes ago      Exited              kube-scheduler              0                   3de353163594b       kube-scheduler-functional-207072
	bfc0e14975938       b6a454c5a800d       15 minutes ago      Exited              kube-controller-manager     0                   bc597351928b9       kube-controller-manager-functional-207072
	
	
	==> containerd <==
	Mar 17 13:07:30 functional-207072 containerd[6629]: time="2025-03-17T13:07:30.071616346Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Mar 17 13:07:30 functional-207072 containerd[6629]: time="2025-03-17T13:07:30.073606767Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:07:30 functional-207072 containerd[6629]: time="2025-03-17T13:07:30.743534473Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:07:32 functional-207072 containerd[6629]: time="2025-03-17T13:07:32.603136777Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 13:07:32 functional-207072 containerd[6629]: time="2025-03-17T13:07:32.603194887Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10967"
	Mar 17 13:07:41 functional-207072 containerd[6629]: time="2025-03-17T13:07:41.071992288Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Mar 17 13:07:41 functional-207072 containerd[6629]: time="2025-03-17T13:07:41.073906041Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:07:41 functional-207072 containerd[6629]: time="2025-03-17T13:07:41.756881483Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:07:43 functional-207072 containerd[6629]: time="2025-03-17T13:07:43.623690717Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 13:07:43 functional-207072 containerd[6629]: time="2025-03-17T13:07:43.623784395Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Mar 17 13:08:00 functional-207072 containerd[6629]: time="2025-03-17T13:08:00.071430686Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Mar 17 13:08:00 functional-207072 containerd[6629]: time="2025-03-17T13:08:00.073341411Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:08:00 functional-207072 containerd[6629]: time="2025-03-17T13:08:00.747710617Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:08:03 functional-207072 containerd[6629]: time="2025-03-17T13:08:03.003394613Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 13:08:03 functional-207072 containerd[6629]: time="2025-03-17T13:08:03.003462639Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21239"
	Mar 17 13:08:13 functional-207072 containerd[6629]: time="2025-03-17T13:08:13.071962983Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Mar 17 13:08:13 functional-207072 containerd[6629]: time="2025-03-17T13:08:13.074175757Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:08:13 functional-207072 containerd[6629]: time="2025-03-17T13:08:13.778388426Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:08:15 functional-207072 containerd[6629]: time="2025-03-17T13:08:15.644800080Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 13:08:15 functional-207072 containerd[6629]: time="2025-03-17T13:08:15.644861455Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=10966"
	Mar 17 13:09:07 functional-207072 containerd[6629]: time="2025-03-17T13:09:07.071460533Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Mar 17 13:09:07 functional-207072 containerd[6629]: time="2025-03-17T13:09:07.073484207Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:09:07 functional-207072 containerd[6629]: time="2025-03-17T13:09:07.772833900Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 17 13:09:09 functional-207072 containerd[6629]: time="2025-03-17T13:09:09.642108956Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Mar 17 13:09:09 functional-207072 containerd[6629]: time="2025-03-17T13:09:09.642176230Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
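	Aside from the rate limiting, every pull attempt above also logs "failed to decode hosts.toml: invalid `host` tree": containerd is rejecting a malformed registry hosts file and, as the subsequent requests to registry-1.docker.io show, falling back to the default endpoint. For comparison, a sketch of a well-formed hosts.toml; the path assumes containerd's common config_path layout and the mirror URL is an assumption:
	
	  # A valid /etc/containerd/certs.d/docker.io/hosts.toml has the shape:
	  #   server = "https://registry-1.docker.io"
	  #
	  #   [host."https://mirror.gcr.io"]
	  #     capabilities = ["pull", "resolve"]
	  # Inspect the file containerd is actually rejecting:
	  minikube -p functional-207072 ssh -- sudo cat /etc/containerd/certs.d/docker.io/hosts.toml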
	
	
	==> coredns [8546ad5adad4a355471b89f708039e53fe89631c17b224c8f0fcbe73d515152f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55440 - 6618 "HINFO IN 7739078152220503757.5332019638741904854. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018572779s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
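
The sequence above ("waiting for Kubernetes API", repeated connection refused against 10.96.0.1:443, then "starting server with unsynced Kubernetes API") is a startup race: this CoreDNS instance came up before the restarted apiserver was reachable. A minimal sketch of that wait-then-degrade loop, illustrative rather than CoreDNS's actual code:

// wait_for_api.go - a minimal sketch of the "wait for the API server" pattern
// visible in the CoreDNS log above; not CoreDNS's implementation.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForEndpoint polls addr until a TCP connection succeeds or the deadline passes.
func waitForEndpoint(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s not reachable after %s: %w", addr, timeout, err)
		}
		fmt.Println("waiting for Kubernetes API before starting server:", err)
		time.Sleep(time.Second)
	}
}

func main() {
	// 10.96.0.1:443 is the in-cluster service address seen in the log.
	if err := waitForEndpoint("10.96.0.1:443", 30*time.Second); err != nil {
		// Degrade instead of crashing, as CoreDNS does above.
		fmt.Println("starting server with unsynced state:", err)
	}
}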
	
	
	==> coredns [f1aa6e566f88769b410d1b80ef13fc0a251e79c5a3fa32b8179d32d3fc263240] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47123 - 38609 "HINFO IN 3421226537138955410.3716377107049692027. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009880429s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
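
This older CoreDNS instance shows the clean shutdown path instead: SIGTERM, then a 5s lameduck window before the servers stop. A minimal sketch of the same pattern, assuming a plain net/http server in place of CoreDNS's plugin chain:

// lameduck.go - a minimal sketch of the SIGTERM-then-lameduck shutdown seen
// in the log above; assumed behavior, not CoreDNS source.
package main

import (
	"context"
	"fmt"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	go srv.ListenAndServe()

	// Block until SIGTERM (or SIGINT) arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()
	<-ctx.Done()

	// Lameduck: keep answering in-flight requests while load balancers
	// notice we are going away, then shut down for real.
	fmt.Println("going into lameduck mode for 5s")
	time.Sleep(5 * time.Second)

	shutdownCtx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	srv.Shutdown(shutdownCtx)
}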
	
	
	==> describe nodes <==
	Name:               functional-207072
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-207072
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
	                    minikube.k8s.io/name=functional-207072
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_03_17T12_53_48_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Mar 2025 12:53:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-207072
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Mar 2025 13:09:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Mar 2025 13:07:19 +0000   Mon, 17 Mar 2025 12:53:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Mar 2025 13:07:19 +0000   Mon, 17 Mar 2025 12:53:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Mar 2025 13:07:19 +0000   Mon, 17 Mar 2025 12:53:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Mar 2025 13:07:19 +0000   Mon, 17 Mar 2025 12:53:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-207072
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb8d3f267a4e49c4ae52015c5f6076dc
	  System UUID:                9a307ec7-4bd9-49c0-af54-ef74d833d8b0
	  Boot ID:                    40219139-515e-4d1c-86e4-bab1900bd49a
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.25
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-l2sm7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-node-fcfd88b6f-484zz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     mysql-58ccfd96bb-qdpnr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     2m37s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-r9f6m                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-functional-207072                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-2cglc                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-functional-207072              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kube-controller-manager-functional-207072     200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-z27vj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-functional-207072              100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-ffgg9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-nltmq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   Starting                 3m29s                  kube-proxy       
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node functional-207072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node functional-207072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node functional-207072 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 15m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node functional-207072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node functional-207072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node functional-207072 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           15m                    node-controller  Node functional-207072 event: Registered Node functional-207072 in Controller
	  Normal   Starting                 3m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m35s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m35s (x8 over 3m35s)  kubelet          Node functional-207072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m35s (x8 over 3m35s)  kubelet          Node functional-207072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m35s (x7 over 3m35s)  kubelet          Node functional-207072 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m29s                  node-controller  Node functional-207072 event: Registered Node functional-207072 in Controller
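
The percentages in the Allocated resources section above follow directly from summing the per-pod requests in the pods table against the node's 8-CPU allocatable; a quick check:

// alloc_check.go - verifying the "Allocated resources" CPU percentage above
// from the per-pod requests listed in the Non-terminated Pods table.
package main

import "fmt"

func main() {
	// CPU requests in millicores, copied from the pods table.
	requests := map[string]int{
		"mysql":                   600,
		"coredns":                 100,
		"etcd":                    100,
		"kindnet":                 100,
		"kube-apiserver":          250,
		"kube-controller-manager": 200,
		"kube-scheduler":          100,
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	allocatable := 8000 // 8 CPUs = 8000m
	fmt.Printf("cpu requests: %dm of %dm = %.0f%%\n",
		total, allocatable, 100*float64(total)/float64(allocatable))
	// Prints: cpu requests: 1450m of 8000m = 18%
}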
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +2.171804] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000008] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000005] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000004] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.047810] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000009] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000001] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000011] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000008] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[Mar17 12:32] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-73d8e31699ad
	[  +0.000007] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +0.000000] ll header: 00000000: fe 1e 18 a3 e2 c7 92 c3 26 97 de 39 08 00
	[  +2.043860] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000003] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ab6f81b436d8
	[  +0.000002] ll header: 00000000: ae e5 ff 7d 16 ad 72 10 f9 69 65 e0 08 00
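
The "martian source" lines mean the kernel saw packets whose source address (the in-cluster service IP 10.96.0.1) should not appear on those Docker bridge interfaces; they are printed because martian logging is enabled. A minimal sketch of inspecting that switch, assuming the standard procfs sysctl path:

// log_martians.go - a sketch that reads the sysctl controlling the
// "martian source" messages above; 1 = log martians, 0 = stay quiet.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
	if err != nil {
		fmt.Println("read sysctl:", err)
		return
	}
	fmt.Println("net.ipv4.conf.all.log_martians =", strings.TrimSpace(string(b)))
}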
	
	
	==> etcd [7236cccde01fce52bdabf85aaa55ed8300adab68f6e52cdc30d947485cc7e3e0] <==
	{"level":"info","ts":"2025-03-17T12:53:43.267944Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T12:53:43.267943Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-207072 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T12:53:43.267981Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T12:53:43.268235Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:53:43.268372Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:53:43.268405Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T12:53:43.268919Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T12:53:43.268919Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T12:53:43.269693Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-03-17T12:53:43.269997Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T12:53:43.270062Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T12:53:43.270092Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T13:03:43.603585Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":742}
	{"level":"info","ts":"2025-03-17T13:03:43.609104Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":742,"took":"5.145633ms","hash":3241218385,"current-db-size-bytes":2134016,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2134016,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-03-17T13:03:43.609178Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":3241218385,"revision":742,"compact-revision":-1}
	{"level":"info","ts":"2025-03-17T13:05:42.638076Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-03-17T13:05:42.638189Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-207072","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-03-17T13:05:42.638320Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:05:42.638363Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:05:42.639839Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-03-17T13:05:42.639877Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-03-17T13:05:42.641329Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-03-17T13:05:42.642960Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-03-17T13:05:42.643055Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-03-17T13:05:42.643072Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-207072","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [e1bc3f243102798d290b461c1d7018aba829621caa4c60c6449bf20cad436190] <==
	{"level":"info","ts":"2025-03-17T13:05:44.978491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-03-17T13:05:44.978577Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-03-17T13:05:44.978705Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:05:44.978749Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-03-17T13:05:44.980829Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-03-17T13:05:44.981605Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-03-17T13:05:44.981678Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-03-17T13:05:44.981642Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-03-17T13:05:44.981766Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-03-17T13:05:46.568741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-03-17T13:05:46.568836Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-03-17T13:05:46.568871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-03-17T13:05:46.568890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-03-17T13:05:46.568908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-03-17T13:05:46.568935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-03-17T13:05:46.568950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-03-17T13:05:46.570611Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:05:46.570618Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-207072 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-03-17T13:05:46.570636Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-03-17T13:05:46.570814Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-03-17T13:05:46.570844Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-03-17T13:05:46.571570Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:05:46.571848Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-03-17T13:05:46.572557Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-03-17T13:05:46.572723Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 13:09:19 up  2:51,  0 users,  load average: 0.44, 0.37, 0.69
	Linux functional-207072 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1cfb186c112bd79bc269f024cde2e27cd78ab55ec7919a891493454b6ce3d123] <==
	I0317 13:04:50.447100       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0317 13:04:50.447407       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0317 13:04:50.447575       1 main.go:148] setting mtu 1500 for CNI 
	I0317 13:04:50.447598       1 main.go:178] kindnetd IP family: "ipv4"
	I0317 13:04:50.447618       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0317 13:04:50.861622       1 controller.go:361] Starting controller kube-network-policies
	I0317 13:04:50.861643       1 controller.go:365] Waiting for informer caches to sync
	I0317 13:04:50.861651       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0317 13:04:51.144535       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0317 13:04:51.144581       1 metrics.go:61] Registering metrics
	I0317 13:04:51.144865       1 controller.go:401] Syncing nftables rules
	I0317 13:05:00.868389       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:05:00.868470       1 main.go:301] handling current node
	I0317 13:05:10.862450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:05:10.862515       1 main.go:301] handling current node
	I0317 13:05:20.861838       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:05:20.861897       1 main.go:301] handling current node
	I0317 13:05:30.870352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:05:30.870405       1 main.go:301] handling current node
	
	
	==> kindnet [b61ce129c13215499721b86dd508a04df22a936666ba6ee669c8622cb38b84c4] <==
	I0317 13:07:13.747983       1 main.go:301] handling current node
	I0317 13:07:23.746108       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:07:23.746213       1 main.go:301] handling current node
	I0317 13:07:33.747085       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:07:33.747132       1 main.go:301] handling current node
	I0317 13:07:43.748456       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:07:43.748528       1 main.go:301] handling current node
	I0317 13:07:53.746576       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:07:53.746655       1 main.go:301] handling current node
	I0317 13:08:03.749092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:08:03.749142       1 main.go:301] handling current node
	I0317 13:08:13.750957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:08:13.750993       1 main.go:301] handling current node
	I0317 13:08:23.752453       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:08:23.752503       1 main.go:301] handling current node
	I0317 13:08:33.746677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:08:33.746718       1 main.go:301] handling current node
	I0317 13:08:43.752458       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:08:43.752500       1 main.go:301] handling current node
	I0317 13:08:53.747105       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:08:53.747157       1 main.go:301] handling current node
	I0317 13:09:03.748460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:09:03.748518       1 main.go:301] handling current node
	I0317 13:09:13.752512       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0317 13:09:13.752559       1 main.go:301] handling current node
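
Both kindnet logs reduce to a fixed-interval reconcile loop: roughly every 10s the daemon re-handles the cluster's only node (192.168.49.2). A minimal sketch of that loop shape, illustrative rather than kindnet's source:

// reconcile_loop.go - a minimal sketch of the fixed-interval reconcile loop
// visible in the kindnet log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	nodeIPs := map[string]struct{}{"192.168.49.2": {}}
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		fmt.Printf("Handling node with IPs: %v\n", nodeIPs)
		// A real daemon would re-sync routes/nftables for each node here.
	}
}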
	
	
	==> kube-apiserver [b93e0f07793a1e1d74360404fe4f62cbcda615d315ebae4210df7a87f1167544] <==
	I0317 13:05:47.645081       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0317 13:05:47.645094       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0317 13:05:47.645134       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0317 13:05:47.645272       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0317 13:05:47.645329       1 policy_source.go:240] refreshing policies
	I0317 13:05:47.651378       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0317 13:05:47.662197       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0317 13:05:48.069787       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0317 13:05:48.521501       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0317 13:05:48.755794       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0317 13:05:48.757189       1 controller.go:615] quota admission added evaluator for: endpoints
	I0317 13:05:48.763330       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0317 13:05:49.515436       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0317 13:05:49.657539       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0317 13:05:49.760224       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0317 13:05:49.768958       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0317 13:05:51.172434       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0317 13:06:06.429086       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.214.44"}
	I0317 13:06:11.749600       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.239.239"}
	I0317 13:06:12.430222       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.22.24"}
	I0317 13:06:12.579666       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.145.167"}
	I0317 13:06:27.675966       1 controller.go:615] quota admission added evaluator for: namespaces
	I0317 13:06:27.879625       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.217.109"}
	I0317 13:06:27.894567       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.129.186"}
	I0317 13:06:42.287105       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.218.193"}
	
	
	==> kube-controller-manager [10bd85a0de0da499cba1ec4a606889e2e789af4628c256536ce63a38a447f282] <==
	I0317 13:06:27.779021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="18.336528ms"
	I0317 13:06:27.849953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="70.982523ms"
	I0317 13:06:27.849994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="70.809706ms"
	I0317 13:06:27.850189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="45.168µs"
	I0317 13:06:27.850189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="50.998µs"
	I0317 13:06:27.854846       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="121.703µs"
	I0317 13:06:27.861164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="55.491µs"
	I0317 13:06:37.433830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="7.025264ms"
	I0317 13:06:37.434116       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="213.138µs"
	I0317 13:06:39.439752       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.558703ms"
	I0317 13:06:39.439889       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="79.282µs"
	I0317 13:06:42.348702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="21.287223ms"
	I0317 13:06:42.353677       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="4.921585ms"
	I0317 13:06:42.353772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="61.89µs"
	I0317 13:06:42.361165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="90.552µs"
	I0317 13:06:45.459008       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="63.555µs"
	I0317 13:06:48.541639       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 13:06:59.080962       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="67.829µs"
	I0317 13:07:15.081073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="80.901µs"
	I0317 13:07:19.314719       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 13:07:30.081340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="134.864µs"
	I0317 13:07:44.081939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="91.593µs"
	I0317 13:07:59.080694       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="68.107µs"
	I0317 13:08:30.080175       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="77.41µs"
	I0317 13:08:45.080626       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="122.641µs"
	
	
	==> kube-controller-manager [bfc0e14975938d7783ff31a54a37dfcb297d7caafd4e9fe6b0676f6cbb58b9c5] <==
	I0317 12:53:51.461611       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0317 12:53:51.466803       1 shared_informer.go:320] Caches are synced for resource quota
	I0317 12:53:51.468208       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-207072" podCIDRs=["10.244.0.0/24"]
	I0317 12:53:51.468247       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 12:53:51.468303       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 12:53:51.469421       1 shared_informer.go:320] Caches are synced for garbage collector
	I0317 12:53:52.212549       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 12:53:52.577381       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="164.43433ms"
	I0317 12:53:52.583706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="6.253096ms"
	I0317 12:53:52.583862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="105.308µs"
	I0317 12:53:52.583941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="37.478µs"
	I0317 12:53:52.591939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.032µs"
	I0317 12:53:53.570746       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="20.272655ms"
	I0317 12:53:53.579999       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="9.213074ms"
	I0317 12:53:53.580079       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="48.953µs"
	I0317 12:53:54.574124       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="82.985µs"
	I0317 12:53:54.579849       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.071µs"
	I0317 12:53:54.584585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="93.591µs"
	I0317 12:53:57.956293       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 12:57:42.320529       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 13:02:46.997644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	I0317 13:05:04.889733       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="90.67µs"
	I0317 13:05:04.910841       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="9.015607ms"
	I0317 13:05:04.911004       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="113.921µs"
	I0317 13:05:19.731026       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-207072"
	
	
	==> kube-proxy [1c1c6a0e1743a1c41cfe991091f02bfce9d8aa61dd12ae6d8514e191cd83b6cf] <==
	I0317 12:53:53.549362       1 server_linux.go:66] "Using iptables proxy"
	I0317 12:53:53.737121       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0317 12:53:53.737193       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 12:53:53.766142       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 12:53:53.766218       1 server_linux.go:170] "Using iptables Proxier"
	I0317 12:53:53.768915       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 12:53:53.769478       1 server.go:497] "Version info" version="v1.32.2"
	I0317 12:53:53.769518       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 12:53:53.771347       1 config.go:199] "Starting service config controller"
	I0317 12:53:53.771402       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 12:53:53.771357       1 config.go:105] "Starting endpoint slice config controller"
	I0317 12:53:53.771524       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 12:53:53.771861       1 config.go:329] "Starting node config controller"
	I0317 12:53:53.771895       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 12:53:53.871636       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 12:53:53.871640       1 shared_informer.go:320] Caches are synced for service config
	I0317 12:53:53.872251       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [4a184fe68ff16191732189506f7403089a3fec348c8c7f34bfd777bc195731e2] <==
	I0317 13:05:33.284719       1 server_linux.go:66] "Using iptables proxy"
	E0317 13:05:33.379339       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-207072\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0317 13:05:34.541485       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-207072\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0317 13:05:36.588290       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-207072\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0317 13:05:41.032756       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-207072\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0317 13:05:49.752821       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0317 13:05:49.752920       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0317 13:05:49.781338       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0317 13:05:49.781436       1 server_linux.go:170] "Using iptables Proxier"
	I0317 13:05:49.784236       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0317 13:05:49.784843       1 server.go:497] "Version info" version="v1.32.2"
	I0317 13:05:49.785026       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:05:49.790662       1 config.go:199] "Starting service config controller"
	I0317 13:05:49.790707       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0317 13:05:49.790732       1 config.go:105] "Starting endpoint slice config controller"
	I0317 13:05:49.790738       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0317 13:05:49.790883       1 config.go:329] "Starting node config controller"
	I0317 13:05:49.790894       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0317 13:05:49.891156       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0317 13:05:49.891153       1 shared_informer.go:320] Caches are synced for node config
	I0317 13:05:49.891192       1 shared_informer.go:320] Caches are synced for service config
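
Two patterns are visible in this restarted kube-proxy: a backoff with roughly doubling intervals while the apiserver still refuses connections (13:05:33, :34, :36, :41), and the standard client-go informer startup in which each config controller blocks on WaitForCacheSync before serving. A minimal sketch of the latter, with the kubeconfig path and watched resource chosen purely for illustration:

// cache_sync.go - a minimal sketch of the "waiting for caches to sync"
// pattern in the kube-proxy log above, using client-go informers.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is illustrative; in-cluster config works the same way.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	fmt.Println("Waiting for caches to sync for service config")
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		panic("failed to sync caches")
	}
	fmt.Println("Caches are synced for service config")
}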
	
	
	==> kube-scheduler [106a2d079f142b765e4c506c6a6da8bac587d0f8ffede954f4f70d28b4232bfe] <==
	W0317 12:53:45.821699       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:45.821748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:45.910948       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0317 12:53:45.910999       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:45.959717       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0317 12:53:45.959764       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:45.977081       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:45.977131       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.022739       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:46.022784       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.043409       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0317 12:53:46.043454       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.062136       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0317 12:53:46.062196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.081352       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0317 12:53:46.081427       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.102531       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0317 12:53:46.102595       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0317 12:53:46.400396       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0317 12:53:46.400456       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0317 12:53:49.074698       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:05:42.692078       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0317 13:05:42.692301       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0317 13:05:42.692377       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0317 13:05:42.693155       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3bf3a2bd985b00b66135670618bb4ce191cf5465be4af1f6070679759080bc6f] <==
	I0317 13:05:46.020063       1 serving.go:386] Generated self-signed cert in-memory
	W0317 13:05:47.563999       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0317 13:05:47.564040       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	W0317 13:05:47.564053       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0317 13:05:47.564062       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0317 13:05:47.647246       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0317 13:05:47.647355       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0317 13:05:47.649967       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0317 13:05:47.650035       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0317 13:05:47.650389       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0317 13:05:47.650546       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0317 13:05:47.751214       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 17 13:08:03 functional-207072 kubelet[7554]: E0317 13:08:03.004032    7554 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xm7f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(8cde6b70-9dd4-4bce-94b9-ee53ce708d8a): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Mar 17 13:08:03 functional-207072 kubelet[7554]: E0317 13:08:03.005381    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8cde6b70-9dd4-4bce-94b9-ee53ce708d8a"
	Mar 17 13:08:07 functional-207072 kubelet[7554]: E0317 13:08:07.071683    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f3ed54d9-4ffa-4899-bdf5-04589b2116cc"
	Mar 17 13:08:15 functional-207072 kubelet[7554]: E0317 13:08:15.645118    7554 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Mar 17 13:08:15 functional-207072 kubelet[7554]: E0317 13:08:15.645191    7554 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Mar 17 13:08:15 functional-207072 kubelet[7554]: E0317 13:08:15.645330    7554 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6npgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-qdpnr_default(7cdedd78-be76-4e31-8576-904510762777): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Mar 17 13:08:15 functional-207072 kubelet[7554]: E0317 13:08:15.646514    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdpnr" podUID="7cdedd78-be76-4e31-8576-904510762777"
	Mar 17 13:08:16 functional-207072 kubelet[7554]: E0317 13:08:16.070576    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8cde6b70-9dd4-4bce-94b9-ee53ce708d8a"
	Mar 17 13:08:20 functional-207072 kubelet[7554]: E0317 13:08:20.071397    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f3ed54d9-4ffa-4899-bdf5-04589b2116cc"
	Mar 17 13:08:29 functional-207072 kubelet[7554]: E0317 13:08:29.071109    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8cde6b70-9dd4-4bce-94b9-ee53ce708d8a"
	Mar 17 13:08:30 functional-207072 kubelet[7554]: E0317 13:08:30.070714    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdpnr" podUID="7cdedd78-be76-4e31-8576-904510762777"
	Mar 17 13:08:32 functional-207072 kubelet[7554]: E0317 13:08:32.071177    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f3ed54d9-4ffa-4899-bdf5-04589b2116cc"
	Mar 17 13:08:41 functional-207072 kubelet[7554]: E0317 13:08:41.070641    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8cde6b70-9dd4-4bce-94b9-ee53ce708d8a"
	Mar 17 13:08:43 functional-207072 kubelet[7554]: E0317 13:08:43.071520    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f3ed54d9-4ffa-4899-bdf5-04589b2116cc"
	Mar 17 13:08:45 functional-207072 kubelet[7554]: E0317 13:08:45.071838    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdpnr" podUID="7cdedd78-be76-4e31-8576-904510762777"
	Mar 17 13:08:52 functional-207072 kubelet[7554]: E0317 13:08:52.070521    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8cde6b70-9dd4-4bce-94b9-ee53ce708d8a"
	Mar 17 13:08:56 functional-207072 kubelet[7554]: E0317 13:08:56.071794    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f3ed54d9-4ffa-4899-bdf5-04589b2116cc"
	Mar 17 13:08:58 functional-207072 kubelet[7554]: E0317 13:08:58.071901    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdpnr" podUID="7cdedd78-be76-4e31-8576-904510762777"
	Mar 17 13:09:05 functional-207072 kubelet[7554]: E0317 13:09:05.070844    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8cde6b70-9dd4-4bce-94b9-ee53ce708d8a"
	Mar 17 13:09:09 functional-207072 kubelet[7554]: E0317 13:09:09.642461    7554 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Mar 17 13:09:09 functional-207072 kubelet[7554]: E0317 13:09:09.642539    7554 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Mar 17 13:09:09 functional-207072 kubelet[7554]: E0317 13:09:09.642658    7554 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zfk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(f3ed54d9-4ffa-4899-bdf5-04589b2116cc): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Mar 17 13:09:09 functional-207072 kubelet[7554]: E0317 13:09:09.643924    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="f3ed54d9-4ffa-4899-bdf5-04589b2116cc"
	Mar 17 13:09:10 functional-207072 kubelet[7554]: E0317 13:09:10.071374    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdpnr" podUID="7cdedd78-be76-4e31-8576-904510762777"
	Mar 17 13:09:19 functional-207072 kubelet[7554]: E0317 13:09:19.070754    7554 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8cde6b70-9dd4-4bce-94b9-ee53ce708d8a"
	
	
	==> kubernetes-dashboard [45e66f4ad01a8ccb13173b99b5fa0e8579f9d5a4d68f4d049798542746f4c92e] <==
	2025/03/17 13:06:36 Starting overwatch
	2025/03/17 13:06:36 Using namespace: kubernetes-dashboard
	2025/03/17 13:06:36 Using in-cluster config to connect to apiserver
	2025/03/17 13:06:36 Using secret token for csrf signing
	2025/03/17 13:06:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/03/17 13:06:36 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/03/17 13:06:36 Successful initial request to the apiserver, version: v1.32.2
	2025/03/17 13:06:36 Generating JWE encryption key
	2025/03/17 13:06:36 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/03/17 13:06:36 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/03/17 13:06:37 Initializing JWE encryption key from synchronized object
	2025/03/17 13:06:37 Creating in-cluster Sidecar client
	2025/03/17 13:06:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/03/17 13:06:37 Serving insecurely on HTTP port: 9090
	2025/03/17 13:07:07 Successful request to sidecar
	
	
	==> storage-provisioner [50a043fc60ea92e740fcf205b5f5b33c72771abc016bbd33bfcfa0dad8111c9a] <==
	I0317 13:05:48.419645       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0317 13:05:48.427561       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0317 13:05:48.427611       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0317 13:06:05.825786       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0317 13:06:05.825872       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8d72e3f8-c1fa-437b-acc2-bab280310a9a", APIVersion:"v1", ResourceVersion:"1188", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-207072_dc27caf9-ee09-4d03-b944-b0127054175c became leader
	I0317 13:06:05.825957       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-207072_dc27caf9-ee09-4d03-b944-b0127054175c!
	I0317 13:06:05.927039       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-207072_dc27caf9-ee09-4d03-b944-b0127054175c!
	I0317 13:06:17.625846       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0317 13:06:17.625943       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2d420033-d83e-4fcb-8867-bd1c8d0ca550 389 0 2025-03-17 12:53:53 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-03-17 12:53:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-c9b87bbe-63b2-406a-8c6f-19edd7040d06 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  c9b87bbe-63b2-406a-8c6f-19edd7040d06 1301 0 2025-03-17 13:06:17 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-03-17 13:06:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-03-17 13:06:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0317 13:06:17.626413       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-c9b87bbe-63b2-406a-8c6f-19edd7040d06" provisioned
	I0317 13:06:17.626433       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0317 13:06:17.626443       1 volume_store.go:212] Trying to save persistentvolume "pvc-c9b87bbe-63b2-406a-8c6f-19edd7040d06"
	I0317 13:06:17.627185       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c9b87bbe-63b2-406a-8c6f-19edd7040d06", APIVersion:"v1", ResourceVersion:"1301", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0317 13:06:17.634817       1 volume_store.go:219] persistentvolume "pvc-c9b87bbe-63b2-406a-8c6f-19edd7040d06" saved
	I0317 13:06:17.634965       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"c9b87bbe-63b2-406a-8c6f-19edd7040d06", APIVersion:"v1", ResourceVersion:"1301", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-c9b87bbe-63b2-406a-8c6f-19edd7040d06
	
	
	==> storage-provisioner [7a8d4c90bba0356568ba2acfd5f9309f417d7fbbe64073a856816abf41ed11f2] <==
	I0317 13:05:33.163490       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0317 13:05:33.166434       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-207072 -n functional-207072
helpers_test.go:261: (dbg) Run:  kubectl --context functional-207072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-qdpnr nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-207072 describe pod busybox-mount mysql-58ccfd96bb-qdpnr nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-207072 describe pod busybox-mount mysql-58ccfd96bb-qdpnr nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-207072/192.168.49.2
	Start Time:       Mon, 17 Mar 2025 13:06:26 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://bb6baa3d83c4e4527d5e027e289d12ff7136cf9e2dfd21796d24e59d049a2025
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 17 Mar 2025 13:06:29 +0000
	      Finished:     Mon, 17 Mar 2025 13:06:29 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68dch (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-68dch:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m54s  default-scheduler  Successfully assigned default/busybox-mount to functional-207072
	  Normal  Pulling    2m54s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.492s (2.492s including waiting). Image size: 2395207 bytes.
	  Normal  Created    2m51s  kubelet            Created container: mount-munger
	  Normal  Started    2m51s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-qdpnr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-207072/192.168.49.2
	Start Time:       Mon, 17 Mar 2025 13:06:42 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6npgh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6npgh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m38s                default-scheduler  Successfully assigned default/mysql-58ccfd96bb-qdpnr to functional-207072
	  Normal   Pulling    67s (x4 over 2m38s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     65s (x4 over 2m35s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     65s (x4 over 2m35s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x8 over 2m35s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     10s (x8 over 2m35s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-207072/192.168.49.2
	Start Time:       Mon, 17 Mar 2025 13:06:11 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5zfk9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5zfk9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m8s                 default-scheduler  Successfully assigned default/nginx-svc to functional-207072
	  Warning  Failed     3m5s                 kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    24s (x10 over 3m5s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     24s (x10 over 3m5s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    13s (x5 over 3m8s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     11s (x5 over 3m5s)   kubelet            Error: ErrImagePull
	  Warning  Failed     11s (x4 over 2m49s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-207072/192.168.49.2
	Start Time:       Mon, 17 Mar 2025 13:06:17 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xm7f4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-xm7f4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-207072
	  Warning  Failed     2m10s (x3 over 3m)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:9d6b58feebd2dbd3c56ab5853333d627cc6e281011cfd6050fa4bcf2072c9496: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    80s (x4 over 3m2s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     77s (x4 over 3m)     kubelet            Error: ErrImagePull
	  Warning  Failed     77s                  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    1s (x11 over 2m59s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     1s (x11 over 2m59s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.25s)
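
All three non-running pods above (sp-pod, mysql-58ccfd96bb-qdpnr, nginx-svc) are blocked on the same root cause: the CI host has exhausted Docker Hub's unauthenticated pull rate limit, so every pull of docker.io/nginx, docker.io/nginx:alpine, and docker.io/mysql:5.7 returns 429 Too Many Requests. One possible mitigation, assuming Docker Hub credentials are available to the job (the secret name "regcred" and the credential placeholders below are illustrative, not part of this run), is to pull as an authenticated user via an imagePullSecret on the default service account:

	# create a registry credential and attach it to the default service account
	kubectl --context functional-207072 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-207072 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Alternatively, pointing the cluster at a pull-through registry cache (for example via minikube's --registry-mirror start flag) would keep repeated test runs from hitting Docker Hub at all.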

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-207072 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f3ed54d9-4ffa-4899-bdf5-04589b2116cc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-207072 -n functional-207072
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-03-17 13:10:12.075090505 +0000 UTC m=+1916.326940126
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-207072 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-207072 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-207072/192.168.49.2
Start Time:       Mon, 17 Mar 2025 13:06:11 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:  10.244.0.4
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5zfk9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-5zfk9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-207072
Warning  Failed     3m57s                kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:a71e0884a7f1192ecf5decf062b67d46b54ad63f0cc1b8aa7e705f739a97c2fc: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    65s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     63s (x5 over 3m57s)  kubelet            Error: ErrImagePull
Warning  Failed     63s (x4 over 3m41s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    7s (x14 over 3m57s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     7s (x14 over 3m57s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-207072 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-207072 logs nginx-svc -n default: exit status 1 (67.77134ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-207072 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.70s)
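
This timeout is downstream of the same 429 throttling: nginx-svc never leaves ImagePullBackOff, so the 4m0s wait on "run=nginx-svc" can only expire. A quick check that the wait is blocked on the image pull rather than on scheduling (a sketch using the same context; the jsonpath query is illustrative, not part of the test):

	# prints the waiting reason of the first container, ImagePullBackOff here
	kubectl --context functional-207072 get pod nginx-svc \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'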

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (1.170738815s)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:361: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (1.17s)
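
Here the host-side docker pull is itself throttled, so the kicbase/echo-server:1.0 fixture never reaches the local daemon, and the dependent ImageCommands subtests below fail in sequence. Assuming Docker Hub credentials are available (the token file path below is a placeholder), authenticating the daemon before the suite runs would raise the pull quota above the anonymous limit:

	# authenticate the local docker daemon, then retry the fixture pull
	docker login --username <user> --password-stdin < /path/to/token.txt
	docker pull kicbase/echo-server:1.0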

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image load --daemon kicbase/echo-server:functional-207072 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-207072" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.67s)
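
This is a cascade from the Setup failure above: since the pull never succeeded, the kicbase/echo-server:functional-207072 tag was never created in the host daemon and "image load --daemon" had nothing to transfer. For reference, the flow the test exercises, sketched with the suite's own commands under the assumption that the pull succeeds:

	# pull, retag for the profile, load into minikube, then verify
	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-207072
	out/minikube-linux-amd64 -p functional-207072 image load --daemon kicbase/echo-server:functional-207072
	out/minikube-linux-amd64 -p functional-207072 image ls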

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image load --daemon kicbase/echo-server:functional-207072 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-207072" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (1.143837637s)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:254: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image save kicbase/echo-server:functional-207072 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:403: expected "/home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:428: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0317 13:06:41.962238  509914 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:06:41.962367  509914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:06:41.962376  509914 out.go:358] Setting ErrFile to fd 2...
	I0317 13:06:41.962383  509914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:06:41.962629  509914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:06:41.963238  509914 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:06:41.963334  509914 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:06:41.963706  509914 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
	I0317 13:06:41.986300  509914 ssh_runner.go:195] Run: systemctl --version
	I0317 13:06:41.986358  509914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
	I0317 13:06:42.008287  509914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
	I0317 13:06:42.105379  509914 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar
	W0317 13:06:42.105439  509914 cache_images.go:253] Failed to load cached images for "functional-207072": loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar: no such file or directory
	I0317 13:06:42.105454  509914 cache_images.go:265] failed pushing to: functional-207072

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
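
Another cascade: ImageSaveToFile never produced echo-server-save.tar (the image was absent from the cluster), so this load fails with "no such file or directory" before reaching containerd. With the image present, the save/load round trip the two tests exercise would amount to the following sketch (the /tmp path is illustrative):

	# export the image from the cluster to a tar, then load it back
	out/minikube-linux-amd64 -p functional-207072 image save kicbase/echo-server:functional-207072 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-207072 image load /tmp/echo-server-save.tar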

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-207072
functional_test.go:436: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-207072: exit status 1 (17.797634ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-207072

                                                
                                                
** /stderr **
functional_test.go:438: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-207072

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (102.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0317 13:10:12.213419  453732 retry.go:31] will retry after 3.156197045s: Temporary Error: Get "http:": http: no Host in request URL
I0317 13:10:15.370769  453732 retry.go:31] will retry after 2.769499838s: Temporary Error: Get "http:": http: no Host in request URL
I0317 13:10:18.140517  453732 retry.go:31] will retry after 3.652235621s: Temporary Error: Get "http:": http: no Host in request URL
I0317 13:10:21.793039  453732 retry.go:31] will retry after 13.332395107s: Temporary Error: Get "http:": http: no Host in request URL
I0317 13:10:35.125892  453732 retry.go:31] will retry after 20.386296682s: Temporary Error: Get "http:": http: no Host in request URL
I0317 13:10:55.513321  453732 retry.go:31] will retry after 24.956899647s: Temporary Error: Get "http:": http: no Host in request URL
I0317 13:11:20.471496  453732 retry.go:31] will retry after 34.548794998s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-207072 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.97.239.239   10.97.239.239   80:30906/TCP   5m44s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (102.87s)
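
The tunnel itself assigned an external IP (nginx-svc shows EXTERNAL-IP 10.97.239.239), but the URL recorded for the probe had no host, hence the repeated Get "http:" retries; with nginx stuck in ImagePullBackOff there was never a body to match against "Welcome to nginx!" anyway. With a running backend, the direct access this test performs is equivalent to the following sketch (the jsonpath query and curl invocation are illustrative, not the test's own code):

	# read the LoadBalancer ingress IP and fetch the nginx welcome page
	IP=$(kubectl --context functional-207072 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP" | grep "Welcome to nginx!"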

                                                
                                    

Test pass (291/330)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.38
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.32.2/json-events 13.59
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.07
18 TestDownloadOnly/v1.32.2/DeleteAll 0.23
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 1.15
21 TestBinaryMirror 0.82
22 TestOffline 55.95
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 206.57
29 TestAddons/serial/Volcano 40.64
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 8.49
35 TestAddons/parallel/Registry 16.77
37 TestAddons/parallel/InspektorGadget 11.86
38 TestAddons/parallel/MetricsServer 7.14
40 TestAddons/parallel/CSI 56.94
41 TestAddons/parallel/Headlamp 17.61
42 TestAddons/parallel/CloudSpanner 5.59
44 TestAddons/parallel/NvidiaDevicePlugin 5.89
45 TestAddons/parallel/Yakd 10.76
46 TestAddons/parallel/AmdGpuDevicePlugin 5.9
47 TestAddons/StoppedEnableDisable 12.21
48 TestCertOptions 28.27
49 TestCertExpiration 214.58
51 TestForceSystemdFlag 29.67
52 TestForceSystemdEnv 38.37
54 TestKVMDriverInstallOrUpdate 4.97
58 TestErrorSpam/setup 24.33
59 TestErrorSpam/start 0.64
60 TestErrorSpam/status 0.97
61 TestErrorSpam/pause 1.66
62 TestErrorSpam/unpause 1.78
63 TestErrorSpam/stop 1.41
66 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 97.35
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.84
75 TestFunctional/serial/CacheCmd/cache/add_local 2.21
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 39.49
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.5
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 4.09
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 15.25
91 TestFunctional/parallel/DryRun 0.39
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.99
97 TestFunctional/parallel/ServiceCmdConnect 11.53
98 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/SSHCmd 0.69
102 TestFunctional/parallel/CpCmd 1.94
103 TestFunctional/parallel/MySQL 365.78
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.94
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
113 TestFunctional/parallel/License 0.67
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.5
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/ServiceCmd/DeployApp 10.15
120 TestFunctional/parallel/ServiceCmd/List 0.5
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
124 TestFunctional/parallel/ServiceCmd/Format 0.38
125 TestFunctional/parallel/ProfileCmd/profile_list 0.42
126 TestFunctional/parallel/ServiceCmd/URL 0.37
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
128 TestFunctional/parallel/MountCmd/any-port 8.63
129 TestFunctional/parallel/MountCmd/specific-port 1.85
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
134 TestFunctional/parallel/Version/short 0.06
135 TestFunctional/parallel/Version/components 0.51
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
140 TestFunctional/parallel/ImageCommands/ImageBuild 4.35
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 96.74
162 TestMultiControlPlane/serial/DeployApp 5.45
163 TestMultiControlPlane/serial/PingHostFromPods 1.14
164 TestMultiControlPlane/serial/AddWorkerNode 22.24
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
167 TestMultiControlPlane/serial/CopyFile 17.77
168 TestMultiControlPlane/serial/StopSecondaryNode 12.71
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
170 TestMultiControlPlane/serial/RestartSecondaryNode 16.25
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 109.51
173 TestMultiControlPlane/serial/DeleteSecondaryNode 9.43
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
175 TestMultiControlPlane/serial/StopCluster 36.3
176 TestMultiControlPlane/serial/RestartCluster 71.98
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
178 TestMultiControlPlane/serial/AddSecondaryNode 37.14
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
183 TestJSONOutput/start/Command 51.2
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.71
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.63
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.77
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
208 TestKicCustomNetwork/create_custom_network 39.51
209 TestKicCustomNetwork/use_default_bridge_network 24.28
210 TestKicExistingNetwork 27.08
211 TestKicCustomSubnet 28.62
212 TestKicStaticIP 27.63
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 54.91
217 TestMountStart/serial/StartWithMountFirst 6.16
218 TestMountStart/serial/VerifyMountFirst 0.27
219 TestMountStart/serial/StartWithMountSecond 8.73
220 TestMountStart/serial/VerifyMountSecond 0.27
221 TestMountStart/serial/DeleteFirst 1.68
222 TestMountStart/serial/VerifyMountPostDelete 0.27
223 TestMountStart/serial/Stop 1.18
224 TestMountStart/serial/RestartStopped 7.9
225 TestMountStart/serial/VerifyMountPostStop 0.27
228 TestMultiNode/serial/FreshStart2Nodes 60.31
229 TestMultiNode/serial/DeployApp2Nodes 18.83
230 TestMultiNode/serial/PingHostFrom2Pods 0.78
231 TestMultiNode/serial/AddNode 18.09
232 TestMultiNode/serial/MultiNodeLabels 0.08
233 TestMultiNode/serial/ProfileList 0.72
234 TestMultiNode/serial/CopyFile 9.9
235 TestMultiNode/serial/StopNode 2.23
236 TestMultiNode/serial/StartAfterStop 8.79
237 TestMultiNode/serial/RestartKeepsNodes 86.48
238 TestMultiNode/serial/DeleteNode 5.17
239 TestMultiNode/serial/StopMultiNode 23.96
240 TestMultiNode/serial/RestartMultiNode 53.11
241 TestMultiNode/serial/ValidateNameConflict 25.73
246 TestPreload 125.61
248 TestScheduledStopUnix 100.31
251 TestInsufficientStorage 12.84
252 TestRunningBinaryUpgrade 65.3
254 TestKubernetesUpgrade 323.7
255 TestMissingContainerUpgrade 96.57
256 TestStoppedBinaryUpgrade/Setup 2.63
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 34.75
260 TestStoppedBinaryUpgrade/Upgrade 146.05
261 TestNoKubernetes/serial/StartWithStopK8s 17.5
262 TestNoKubernetes/serial/Start 5.56
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
264 TestNoKubernetes/serial/ProfileList 5.36
265 TestNoKubernetes/serial/Stop 1.86
266 TestNoKubernetes/serial/StartNoArgs 7.83
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
282 TestNetworkPlugins/group/false 3.66
287 TestPause/serial/Start 45.72
288 TestStoppedBinaryUpgrade/MinikubeLogs 2.36
289 TestPause/serial/SecondStartNoReconfiguration 6.72
290 TestPause/serial/Pause 0.8
291 TestPause/serial/VerifyStatus 0.41
292 TestPause/serial/Unpause 0.75
293 TestPause/serial/PauseAgain 0.9
294 TestPause/serial/DeletePaused 5.99
295 TestPause/serial/VerifyDeletedResources 0.84
297 TestStartStop/group/old-k8s-version/serial/FirstStart 109.63
299 TestStartStop/group/no-preload/serial/FirstStart 61.25
301 TestStartStop/group/embed-certs/serial/FirstStart 52.32
302 TestStartStop/group/old-k8s-version/serial/DeployApp 10.46
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
304 TestStartStop/group/old-k8s-version/serial/Stop 12.08
305 TestStartStop/group/no-preload/serial/DeployApp 10.26
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/old-k8s-version/serial/SecondStart 29.26
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
309 TestStartStop/group/no-preload/serial/Stop 13.23
310 TestStartStop/group/embed-certs/serial/DeployApp 9.29
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/SecondStart 264.25
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
314 TestStartStop/group/embed-certs/serial/Stop 13.13
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 26.01
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
317 TestStartStop/group/embed-certs/serial/SecondStart 265.17
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
320 TestStartStop/group/old-k8s-version/serial/Pause 2.99
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 45.86
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.98
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
327 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 265.14
329 TestStartStop/group/newest-cni/serial/FirstStart 30.99
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
332 TestStartStop/group/newest-cni/serial/Stop 1.84
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
334 TestStartStop/group/newest-cni/serial/SecondStart 13.52
335 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
338 TestStartStop/group/newest-cni/serial/Pause 3.01
339 TestNetworkPlugins/group/auto/Start 55.01
340 TestNetworkPlugins/group/auto/KubeletFlags 0.29
341 TestNetworkPlugins/group/auto/NetCatPod 9.21
342 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
343 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
344 TestNetworkPlugins/group/auto/DNS 0.13
345 TestNetworkPlugins/group/auto/Localhost 0.11
346 TestNetworkPlugins/group/auto/HairPin 0.12
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
348 TestStartStop/group/no-preload/serial/Pause 3.06
349 TestNetworkPlugins/group/kindnet/Start 60.13
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
351 TestNetworkPlugins/group/calico/Start 56.36
352 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
353 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
354 TestStartStop/group/embed-certs/serial/Pause 3.25
355 TestNetworkPlugins/group/custom-flannel/Start 43.98
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
358 TestNetworkPlugins/group/kindnet/NetCatPod 9.2
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
361 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.2
362 TestNetworkPlugins/group/calico/KubeletFlags 0.29
363 TestNetworkPlugins/group/calico/NetCatPod 8.2
364 TestNetworkPlugins/group/kindnet/DNS 0.13
365 TestNetworkPlugins/group/kindnet/Localhost 0.11
366 TestNetworkPlugins/group/kindnet/HairPin 0.11
367 TestNetworkPlugins/group/custom-flannel/DNS 0.14
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
370 TestNetworkPlugins/group/calico/DNS 0.16
371 TestNetworkPlugins/group/calico/Localhost 0.14
372 TestNetworkPlugins/group/calico/HairPin 0.12
373 TestNetworkPlugins/group/enable-default-cni/Start 70.46
374 TestNetworkPlugins/group/flannel/Start 45.99
375 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
376 TestNetworkPlugins/group/bridge/Start 69.21
377 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
378 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
379 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.42
380 TestNetworkPlugins/group/flannel/ControllerPod 6.01
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
382 TestNetworkPlugins/group/flannel/NetCatPod 8.19
383 TestNetworkPlugins/group/flannel/DNS 0.15
384 TestNetworkPlugins/group/flannel/Localhost 0.11
385 TestNetworkPlugins/group/flannel/HairPin 0.11
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.22
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 9.22
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
393 TestNetworkPlugins/group/bridge/DNS 0.14
394 TestNetworkPlugins/group/bridge/Localhost 0.13
395 TestNetworkPlugins/group/bridge/HairPin 0.12

TestDownloadOnly/v1.20.0/json-events (15.38s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-960465 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-960465 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (15.381412603s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.38s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0317 12:38:31.174177  453732 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0317 12:38:31.174414  453732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
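
The two log lines above are the whole test: "preload exists" passes if the versioned tarball is already sitting in the local cache. A minimal sketch of that kind of check, using only the cache path printed in the log (illustrative; not minikube's actual preload.go logic):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The cache layout is taken from the log above; treating a successful
	// stat as "preload exists" is an assumption for illustration.
	home, _ := os.UserHomeDir()
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("Found local preload:", tarball)
	} else {
		fmt.Println("No local preload; it would be downloaded on demand.")
	}
}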

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-960465
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-960465: exit status 85 (70.294086ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-960465 | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |          |
	|         | -p download-only-960465        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:38:15
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:38:15.842204  453753 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:38:15.842526  453753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:38:15.842540  453753 out.go:358] Setting ErrFile to fd 2...
	I0317 12:38:15.842547  453753 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:38:15.842823  453753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	W0317 12:38:15.843010  453753 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20539-446828/.minikube/config/config.json: open /home/jenkins/minikube-integration/20539-446828/.minikube/config/config.json: no such file or directory
	I0317 12:38:15.843677  453753 out.go:352] Setting JSON to true
	I0317 12:38:15.844809  453753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8436,"bootTime":1742206660,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:38:15.844945  453753 start.go:139] virtualization: kvm guest
	I0317 12:38:15.847272  453753 out.go:97] [download-only-960465] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0317 12:38:15.847552  453753 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball: no such file or directory
	I0317 12:38:15.847584  453753 notify.go:220] Checking for updates...
	I0317 12:38:15.849002  453753 out.go:169] MINIKUBE_LOCATION=20539
	I0317 12:38:15.850534  453753 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:38:15.852367  453753 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:38:15.853876  453753 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 12:38:15.855396  453753 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0317 12:38:15.858112  453753 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 12:38:15.858415  453753 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:38:15.883230  453753 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 12:38:15.883356  453753 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:38:15.936235  453753 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 12:38:15.926537165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:38:15.936370  453753 docker.go:318] overlay module found
	I0317 12:38:15.938254  453753 out.go:97] Using the docker driver based on user configuration
	I0317 12:38:15.938306  453753 start.go:297] selected driver: docker
	I0317 12:38:15.938314  453753 start.go:901] validating driver "docker" against <nil>
	I0317 12:38:15.938418  453753 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:38:15.990811  453753 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 12:38:15.981360823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:38:15.991187  453753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:38:15.991764  453753 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0317 12:38:15.991997  453753 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 12:38:15.993763  453753 out.go:169] Using Docker driver with root privileges
	I0317 12:38:15.995020  453753 cni.go:84] Creating CNI manager for ""
	I0317 12:38:15.995153  453753 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:38:15.995175  453753 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 12:38:15.995271  453753 start.go:340] cluster config:
	{Name:download-only-960465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-960465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:38:15.996910  453753 out.go:97] Starting "download-only-960465" primary control-plane node in "download-only-960465" cluster
	I0317 12:38:15.996956  453753 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 12:38:15.998459  453753 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0317 12:38:15.998507  453753 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 12:38:15.998679  453753 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 12:38:16.018475  453753 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 12:38:16.018727  453753 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0317 12:38:16.018848  453753 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 12:38:16.513846  453753 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0317 12:38:16.513884  453753 cache.go:56] Caching tarball of preloaded images
	I0317 12:38:16.514094  453753 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 12:38:16.516572  453753 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0317 12:38:16.516619  453753 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0317 12:38:16.627434  453753 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0317 12:38:21.351055  453753 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0317 12:38:29.415932  453753 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0317 12:38:29.416036  453753 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0317 12:38:30.357130  453753 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0317 12:38:30.357531  453753 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/download-only-960465/config.json ...
	I0317 12:38:30.357567  453753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/download-only-960465/config.json: {Name:mk78078bba2f482eb35f1abcb4f902ade4b2e9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0317 12:38:30.357732  453753 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0317 12:38:30.357911  453753 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20539-446828/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-960465 host does not exist
	  To start a cluster, run: "minikube start -p download-only-960465"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
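
One detail worth noting in the log above: the preload URL carries a "?checksum=md5:..." query parameter, and the later "saving checksum"/"verifying checksum" lines show the tarball being re-hashed after download. A sketch of that verification step, assuming a plain MD5 over the whole file (minikube's download package handles this internally; the helper below is illustrative only):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-computes a file's MD5 and compares it with the expected hex
// digest, i.e. the value carried after "checksum=md5:" in the download URL.
func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	if len(os.Args) != 3 {
		fmt.Println("usage: verify <file> <md5-hex>")
		return
	}
	fmt.Println(verifyMD5(os.Args[1], os.Args[2]))
}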

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-960465
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.32.2/json-events (13.59s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-498596 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-498596 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.586775544s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (13.59s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0317 12:38:45.217853  453732 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0317 12:38:45.217911  453732 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-498596
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-498596: exit status 85 (70.852492ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-960465 | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | -p download-only-960465        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| delete  | -p download-only-960465        | download-only-960465 | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC | 17 Mar 25 12:38 UTC |
	| start   | -o=json --download-only        | download-only-498596 | jenkins | v1.35.0 | 17 Mar 25 12:38 UTC |                     |
	|         | -p download-only-498596        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/03/17 12:38:31
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0317 12:38:31.678938  454116 out.go:345] Setting OutFile to fd 1 ...
	I0317 12:38:31.679223  454116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:38:31.679233  454116 out.go:358] Setting ErrFile to fd 2...
	I0317 12:38:31.679238  454116 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 12:38:31.679450  454116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 12:38:31.680037  454116 out.go:352] Setting JSON to true
	I0317 12:38:31.681088  454116 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":8452,"bootTime":1742206660,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 12:38:31.681223  454116 start.go:139] virtualization: kvm guest
	I0317 12:38:31.683670  454116 out.go:97] [download-only-498596] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 12:38:31.683934  454116 notify.go:220] Checking for updates...
	I0317 12:38:31.685552  454116 out.go:169] MINIKUBE_LOCATION=20539
	I0317 12:38:31.687366  454116 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 12:38:31.689163  454116 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 12:38:31.690802  454116 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 12:38:31.692512  454116 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0317 12:38:31.695497  454116 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0317 12:38:31.695897  454116 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 12:38:31.721355  454116 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 12:38:31.721454  454116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:38:31.773975  454116 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-17 12:38:31.764707863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:38:31.774136  454116 docker.go:318] overlay module found
	I0317 12:38:31.775835  454116 out.go:97] Using the docker driver based on user configuration
	I0317 12:38:31.775864  454116 start.go:297] selected driver: docker
	I0317 12:38:31.775873  454116 start.go:901] validating driver "docker" against <nil>
	I0317 12:38:31.775996  454116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 12:38:31.827798  454116 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:true NGoroutines:44 SystemTime:2025-03-17 12:38:31.818273195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 12:38:31.828074  454116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0317 12:38:31.828843  454116 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0317 12:38:31.829068  454116 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0317 12:38:31.830756  454116 out.go:169] Using Docker driver with root privileges
	I0317 12:38:31.832119  454116 cni.go:84] Creating CNI manager for ""
	I0317 12:38:31.832222  454116 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0317 12:38:31.832235  454116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0317 12:38:31.832354  454116 start.go:340] cluster config:
	{Name:download-only-498596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-498596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 12:38:31.833671  454116 out.go:97] Starting "download-only-498596" primary control-plane node in "download-only-498596" cluster
	I0317 12:38:31.833697  454116 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0317 12:38:31.834941  454116 out.go:97] Pulling base image v0.0.46-1741860993-20523 ...
	I0317 12:38:31.834968  454116 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:38:31.835074  454116 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local docker daemon
	I0317 12:38:31.853470  454116 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 to local cache
	I0317 12:38:31.853702  454116 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory
	I0317 12:38:31.853729  454116 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 in local cache directory, skipping pull
	I0317 12:38:31.853734  454116 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 exists in cache, skipping pull
	I0317 12:38:31.853743  454116 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 as a tarball
	I0317 12:38:32.342719  454116 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	I0317 12:38:32.342770  454116 cache.go:56] Caching tarball of preloaded images
	I0317 12:38:32.343027  454116 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
	I0317 12:38:32.344877  454116 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0317 12:38:32.344909  454116 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4 ...
	I0317 12:38:32.458591  454116 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:17ec4d97c92604221650726c3857ee2a -> /home/jenkins/minikube-integration/20539-446828/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-498596 host does not exist
	  To start a cluster, run: "minikube start -p download-only-498596"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.07s)

TestDownloadOnly/v1.32.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.23s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-498596
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (1.15s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-513231 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-513231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-513231
--- PASS: TestDownloadOnlyKic (1.15s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I0317 12:38:47.110833  453732 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-312807 --alsologtostderr --binary-mirror http://127.0.0.1:45577 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-312807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-312807
--- PASS: TestBinaryMirror (0.82s)
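
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:45577, and the first log line shows the kubectl binary being fetched by URL rather than cached. The essential idea behind a binary mirror is URL rewriting: keep the dl.k8s.io release path, swap in the mirror's scheme and host. A sketch under that assumption (not minikube's actual implementation):

package main

import (
	"fmt"
	"net/url"
)

// rewriteToMirror swaps the host of a release URL for the mirror's host,
// keeping the path, so an air-gapped CI box can serve the binaries itself.
// The exact rewriting behavior is an assumption for illustration.
func rewriteToMirror(orig, mirror string) (string, error) {
	u, err := url.Parse(orig)
	if err != nil {
		return "", err
	}
	m, err := url.Parse(mirror)
	if err != nil {
		return "", err
	}
	u.Scheme, u.Host = m.Scheme, m.Host
	return u.String(), nil
}

func main() {
	out, _ := rewriteToMirror(
		"https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl",
		"http://127.0.0.1:45577")
	fmt.Println(out) // http://127.0.0.1:45577/release/v1.32.2/bin/linux/amd64/kubectl
}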

TestOffline (55.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-207380 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-207380 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (53.352518511s)
helpers_test.go:175: Cleaning up "offline-containerd-207380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-207380
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-207380: (2.594053008s)
--- PASS: TestOffline (55.95s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-012219
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-012219: exit status 85 (60.795962ms)

-- stdout --
	* Profile "addons-012219" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-012219"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
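
Both PreSetup tests assert on the process exit code rather than on output: an addon operation against a profile that does not exist must fail with exit status 85, as the "Non-zero exit" lines record. A sketch of how such an assertion can be driven from Go (the binary path matches this run; the snippet is illustrative, not the suite's actual helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test runs and inspect the exit code.
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-012219")
	err := cmd.Run()
	if err == nil {
		fmt.Println("unexpected success (exit code 0)")
	} else if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", ee.ExitCode()) // expected: 85 while the profile is absent
	} else {
		fmt.Println("could not run:", err)
	}
}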

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-012219
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-012219: exit status 85 (60.693187ms)
-- stdout --
	* Profile "addons-012219" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-012219"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (206.57s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-012219 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-012219 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m26.573595526s)
--- PASS: TestAddons/Setup (206.57s)

TestAddons/serial/Volcano (40.64s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 10.398744ms
addons_test.go:823: volcano-controller stabilized in 10.455049ms
addons_test.go:807: volcano-scheduler stabilized in 10.489377ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-bhm6s" [b311b5f9-b6eb-4773-820a-4415c8b51825] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004029154s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-b8ljr" [3818217d-4bc6-4b14-8c8a-9ecf9cf79bf5] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004331843s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-pkrhk" [6f461ef9-bdcc-4e1b-a418-1930288d71ce] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004368642s
addons_test.go:842: (dbg) Run:  kubectl --context addons-012219 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-012219 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-012219 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e11cde98-2ed8-49ea-969c-c4f830e84f12] Pending
helpers_test.go:344: "test-job-nginx-0" [e11cde98-2ed8-49ea-969c-c4f830e84f12] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e11cde98-2ed8-49ea-969c-c4f830e84f12] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003776947s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 addons disable volcano --alsologtostderr -v=1: (11.271919694s)
--- PASS: TestAddons/serial/Volcano (40.64s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-012219 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-012219 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (8.49s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-012219 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-012219 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec3979a3-7b5c-4901-bbd1-12f67b2ec7a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ec3979a3-7b5c-4901-bbd1-12f67b2ec7a6] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003621131s
addons_test.go:633: (dbg) Run:  kubectl --context addons-012219 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-012219 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-012219 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.49s)

TestAddons/parallel/Registry (16.77s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 24.238372ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-qxwgl" [455262b9-8f7c-405f-8f6a-e11619b4a82b] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004506712s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6mr4n" [1ff4a6b3-772a-4bb4-b071-5fda919d74bb] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003568532s
addons_test.go:331: (dbg) Run:  kubectl --context addons-012219 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-012219 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-012219 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.88049438s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 ip
2025/03/17 12:43:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.77s)
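
The registry check above boils down to one in-cluster HTTP probe: a throwaway busybox pod runs "wget --spider" against the registry service's cluster DNS name. A minimal Go sketch of the same probe (illustrative only, not the test's actual helper; context name and image are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// probeRegistry launches a one-shot busybox pod that sends a HEAD-style
// request ("wget --spider") to the registry's in-cluster service name.
// A zero exit status means the service resolved and answered.
func probeRegistry(kubeContext string) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"run", "--rm", "registry-probe", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("registry probe failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := probeRegistry("addons-012219"); err != nil {
		fmt.Println(err)
	}
}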

TestAddons/parallel/InspektorGadget (11.86s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-48x5w" [eb5c16ff-96f5-467b-be8b-6e4929f8ed16] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002673463s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 addons disable inspektor-gadget --alsologtostderr -v=1: (5.855351139s)
--- PASS: TestAddons/parallel/InspektorGadget (11.86s)

TestAddons/parallel/MetricsServer (7.14s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.179641ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-rmd9f" [457e13af-aba0-4869-9953-d240bdcf8c93] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002873069s
addons_test.go:402: (dbg) Run:  kubectl --context addons-012219 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 addons disable metrics-server --alsologtostderr -v=1: (1.054976202s)
--- PASS: TestAddons/parallel/MetricsServer (7.14s)

TestAddons/parallel/CSI (56.94s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0317 12:43:25.855540  453732 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0317 12:43:25.861825  453732 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0317 12:43:25.861864  453732 kapi.go:107] duration metric: took 6.339685ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 6.35482ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-012219 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-012219 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bfd12a28-bc31-4467-b439-06c2ac31c3d7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bfd12a28-bc31-4467-b439-06c2ac31c3d7] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004220342s
addons_test.go:511: (dbg) Run:  kubectl --context addons-012219 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-012219 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-012219 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-012219 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-012219 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-012219 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-012219 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-012219 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fa3f676d-d176-4235-9a46-8136415f1fa0] Pending
helpers_test.go:344: "task-pv-pod-restore" [fa3f676d-d176-4235-9a46-8136415f1fa0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fa3f676d-d176-4235-9a46-8136415f1fa0] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004355066s
addons_test.go:553: (dbg) Run:  kubectl --context addons-012219 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-012219 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-012219 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.68414366s)
--- PASS: TestAddons/parallel/CSI (56.94s)
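
The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are a poll loop waiting for the claim to bind. A compact Go sketch of that loop (assumes kubectl on PATH and a reachable context; not minikube's own helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls the PVC's status.phase via kubectl until it reports
// "Bound" or the timeout elapses, mirroring the helpers_test.go loop above.
func waitPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitPVCBound("addons-012219", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}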

TestAddons/parallel/Headlamp (17.61s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-012219 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-vv9f2" [4056d2b4-450b-4912-ae2b-7e2d3c8d0cc3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-vv9f2" [4056d2b4-450b-4912-ae2b-7e2d3c8d0cc3] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-vv9f2" [4056d2b4-450b-4912-ae2b-7e2d3c8d0cc3] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003476007s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 addons disable headlamp --alsologtostderr -v=1: (5.757810256s)
--- PASS: TestAddons/parallel/Headlamp (17.61s)

TestAddons/parallel/CloudSpanner (5.59s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-kr529" [97e8be75-f5d1-4f98-9523-61ec07d77b95] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004040752s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/NvidiaDevicePlugin (5.89s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s96nr" [dd2959e8-cb33-4011-825c-beffbbfe67f2] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004445772s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.89s)

TestAddons/parallel/Yakd (10.76s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-58klh" [0a6e0335-8c2f-429f-832c-918b213b46c9] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003797364s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012219 addons disable yakd --alsologtostderr -v=1: (5.759644649s)
--- PASS: TestAddons/parallel/Yakd (10.76s)

TestAddons/parallel/AmdGpuDevicePlugin (5.9s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-vshjt" [f90dc780-3781-4dfa-aa72-9f01de540522] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004759559s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-012219 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.90s)

TestAddons/StoppedEnableDisable (12.21s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-012219
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-012219: (11.934110619s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-012219
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-012219
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-012219
--- PASS: TestAddons/StoppedEnableDisable (12.21s)

TestCertOptions (28.27s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-934442 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-934442 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (25.678104723s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-934442 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-934442 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-934442 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-934442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-934442
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-934442: (1.94859687s)
--- PASS: TestCertOptions (28.27s)
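
The openssl step above verifies that every --apiserver-ips/--apiserver-names value landed in the certificate's SANs. The same inspection in Go with only the standard library (the local file path is hypothetical; in the test the cert lives at /var/lib/minikube/certs/apiserver.crt on the node):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// apiserver.crt is assumed to have been copied off the node first,
	// e.g. via "minikube ssh" plus "cat".
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The run above expects localhost and www.google.com among the DNS
	// names, and 127.0.0.1 / 192.168.15.15 among the IP SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
}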

TestCertExpiration (214.58s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-193618 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-193618 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.470031264s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-193618 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-193618 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.750401286s)
helpers_test.go:175: Cleaning up "cert-expiration-193618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-193618
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-193618: (2.363165367s)
--- PASS: TestCertExpiration (214.58s)

TestForceSystemdFlag (29.67s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-998522 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-998522 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.204946009s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-998522 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-998522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-998522
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-998522: (2.047669893s)
--- PASS: TestForceSystemdFlag (29.67s)
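
The "cat /etc/containerd/config.toml" step checks that --force-systemd actually switched containerd's runc options to the systemd cgroup driver. A trivial local sketch of that assertion (operates on a copied config file; the path is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// config.toml is assumed to be a copy of the node's
	// /etc/containerd/config.toml fetched via "minikube ssh".
	data, err := os.ReadFile("config.toml")
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(data), "SystemdCgroup = true") {
		fmt.Println("containerd is using the systemd cgroup driver")
	} else {
		fmt.Println("SystemdCgroup is not enabled")
	}
}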

TestForceSystemdEnv (38.37s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-264712 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-264712 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.441587899s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-264712 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-264712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-264712
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-264712: (5.509646357s)
--- PASS: TestForceSystemdEnv (38.37s)

TestKVMDriverInstallOrUpdate (4.97s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0317 13:37:22.735257  453732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 13:37:22.735376  453732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0317 13:37:22.781483  453732 install.go:62] docker-machine-driver-kvm2: exit status 1
W0317 13:37:22.781662  453732 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0317 13:37:22.781727  453732 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3483339833/001/docker-machine-driver-kvm2
I0317 13:37:23.032652  453732 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3483339833/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0006a55a8 gz:0xc0006a5690 tar:0xc0006a5640 tar.bz2:0xc0006a5650 tar.gz:0xc0006a5660 tar.xz:0xc0006a5670 tar.zst:0xc0006a5680 tbz2:0xc0006a5650 tgz:0xc0006a5660 txz:0xc0006a5670 tzst:0xc0006a5680 xz:0xc0006a5698 zip:0xc0006a56a0 zst:0xc0006a56b0] Getters:map[file:0xc00008aea0 http:0xc000796780 https:0xc0007967d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0317 13:37:23.032723  453732 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3483339833/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.97s)
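
The log above shows the driver downloader's fallback: the arch-suffixed release asset 404s on its checksum file, so it retries the common, un-suffixed name. A simplified Go sketch of that two-step fallback (checksum verification omitted; the real downloader also validates a .sha256 file):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url to dst, treating any non-200 status as an error,
// which is what triggers the fallback in the log above.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

// downloadDriver tries the arch-specific asset first and falls back to
// the common name, matching the v1.3.0 behavior recorded above.
func downloadDriver(version, arch, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" +
		version + "/docker-machine-driver-kvm2"
	if err := fetch(base+"-"+arch, dst); err == nil {
		return nil
	}
	return fetch(base, dst)
}

func main() {
	if err := downloadDriver("v1.3.0", "amd64", "docker-machine-driver-kvm2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}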

TestErrorSpam/setup (24.33s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-886205 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-886205 --driver=docker  --container-runtime=containerd
E0317 12:52:55.545660  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-886205 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-886205 --driver=docker  --container-runtime=containerd: (24.334493856s)
--- PASS: TestErrorSpam/setup (24.33s)

TestErrorSpam/start (0.64s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

TestErrorSpam/status (0.97s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 status
--- PASS: TestErrorSpam/status (0.97s)

TestErrorSpam/pause (1.66s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 pause
--- PASS: TestErrorSpam/pause (1.66s)

TestErrorSpam/unpause (1.78s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (1.41s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 stop: (1.214674076s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-886205 --log_dir /tmp/nospam-886205 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20539-446828/.minikube/files/etc/test/nested/copy/453732/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (97.35s)
=== RUN   TestFunctional/serial/SoftStart
I0317 13:03:38.746243  453732 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207072 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-207072 --alsologtostderr -v=8: (1m37.348637077s)
functional_test.go:680: soft start took 1m37.349463676s for "functional-207072" cluster.
I0317 13:05:16.095326  453732 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (97.35s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-207072 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.84s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.84s)

TestFunctional/serial/CacheCmd/cache/add_local (2.21s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-207072 /tmp/TestFunctionalserialCacheCmdcacheadd_local1320121224/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cache add minikube-local-cache-test:functional-207072
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-207072 cache add minikube-local-cache-test:functional-207072: (1.869215241s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cache delete minikube-local-cache-test:functional-207072
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-207072
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.42926ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
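
The cache_reload sequence above in miniature: remove the image from the node, confirm crictl no longer finds it (the expected exit status 1), run "cache reload", then confirm it is back. A sketch driving the same binary (profile name and binary path taken from the log; not the test's own code):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	const profile = "functional-207072"
	const image = "registry.k8s.io/pause:latest"
	_ = run("-p", profile, "ssh", "sudo", "crictl", "rmi", image)
	if run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image) == nil {
		fmt.Println("image unexpectedly still present after rmi")
	}
	_ = run("-p", profile, "cache", "reload")
	if run("-p", profile, "ssh", "sudo", "crictl", "inspecti", image) != nil {
		fmt.Println("image still missing after cache reload")
	}
}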

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 kubectl -- --context functional-207072 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-207072 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.49s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207072 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-207072 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.486451211s)
functional_test.go:778: restart took 39.486600581s for "functional-207072" cluster.
I0317 13:06:03.162978  453732 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (39.49s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-207072 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
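
The phase/status pairs above come from a single "get po -o json" call over the tier=control-plane pods. A Go sketch of that parse (field names follow the Kubernetes Pod API; a rough stand-in for the test's own logic):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the fields the health check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-207072",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system",
		"-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		status := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				status = "Ready"
			}
		}
		fmt.Printf("%s phase: %s, status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, status)
	}
}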

TestFunctional/serial/LogsCmd (1.5s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-207072 logs: (1.495053837s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

TestFunctional/serial/LogsFileCmd (1.51s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 logs --file /tmp/TestFunctionalserialLogsFileCmd3205104625/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-207072 logs --file /tmp/TestFunctionalserialLogsFileCmd3205104625/001/logs.txt: (1.512889796s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

TestFunctional/serial/InvalidService (4.09s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-207072 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-207072
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-207072: exit status 115 (343.267051ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31843 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-207072 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)
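
The SVC_UNREACHABLE exit above means the Service object exists but no running pod backs it, so its Endpoints list is empty. A minimal Go version of that check (a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasEndpoints reports whether the service has at least one ready
// endpoint address, i.e. a running pod to route traffic to.
func hasEndpoints(kubeContext, service, namespace string) bool {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "endpoints", service, "-n", namespace,
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	if !hasEndpoints("functional-207072", "invalid-svc", "default") {
		fmt.Println("service not available: no running pod for service invalid-svc found")
	}
}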

TestFunctional/parallel/ConfigCmd (0.4s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 config get cpus: exit status 14 (80.334817ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 config get cpus: exit status 14 (60.310764ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (15.25s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-207072 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-207072 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 507426: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.25s)
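
The "unable to kill pid 507426: os: process already finished" warning is benign: the dashboard daemon exited before the test's cleanup tried to kill it. Since Go 1.16, (*os.Process).Kill reports that case as os.ErrProcessDone, so a teardown helper can ignore it; a small sketch under that assumption:

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

// stopDaemon kills a background process but treats "already
// finished" as success, matching the warning in the log above.
func stopDaemon(p *os.Process) error {
	if err := p.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
		return err
	}
	return nil
}

func main() {
	cmd := exec.Command("true")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = cmd.Wait()                                // process has exited and been reaped
	fmt.Println("stop:", stopDaemon(cmd.Process)) // prints: stop: <nil>
}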

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207072 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-207072 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (172.998948ms)

-- stdout --
	* [functional-207072] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0317 13:06:26.294243  506986 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:06:26.294523  506986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:06:26.294536  506986 out.go:358] Setting ErrFile to fd 2...
	I0317 13:06:26.294540  506986 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:06:26.294733  506986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:06:26.295338  506986 out.go:352] Setting JSON to false
	I0317 13:06:26.297309  506986 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10126,"bootTime":1742206660,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:06:26.297480  506986 start.go:139] virtualization: kvm guest
	I0317 13:06:26.300469  506986 out.go:177] * [functional-207072] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:06:26.302243  506986 notify.go:220] Checking for updates...
	I0317 13:06:26.302329  506986 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:06:26.306483  506986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:06:26.307923  506986 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 13:06:26.309368  506986 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 13:06:26.310789  506986 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:06:26.312538  506986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:06:26.314581  506986 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:06:26.315106  506986 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:06:26.347092  506986 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:06:26.347203  506986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:06:26.400464  506986 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 13:06:26.390304968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 13:06:26.400587  506986 docker.go:318] overlay module found
	I0317 13:06:26.402397  506986 out.go:177] * Using the docker driver based on existing profile
	I0317 13:06:26.403893  506986 start.go:297] selected driver: docker
	I0317 13:06:26.403918  506986 start.go:901] validating driver "docker" against &{Name:functional-207072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:06:26.404032  506986 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:06:26.406381  506986 out.go:201] 
	W0317 13:06:26.408000  506986 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0317 13:06:26.409423  506986 out.go:201] 

** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207072 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.39s)
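
Both dry-run invocations fail fast in validation: 250MiB requested against the 1800MB usable floor yields RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 before any container work starts, which is exactly what the test expects. A hypothetical sketch of that style of preflight check; the constant and function names are illustrative, not minikube's:

package main

import (
	"fmt"
	"os"
)

// minUsableMB mirrors the 1800MB floor quoted in the log above;
// the name is made up for this sketch.
const minUsableMB = 1800

// validateMemory rejects an undersized request before any driver
// work happens, the way --dry-run fails here.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // matches the exit status captured in the log
	}
}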

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-207072 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-207072 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (162.025863ms)

-- stdout --
	* [functional-207072] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0317 13:06:26.131339  506896 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:06:26.131603  506896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:06:26.131612  506896 out.go:358] Setting ErrFile to fd 2...
	I0317 13:06:26.131615  506896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:06:26.131917  506896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:06:26.132550  506896 out.go:352] Setting JSON to false
	I0317 13:06:26.133661  506896 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10126,"bootTime":1742206660,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:06:26.133769  506896 start.go:139] virtualization: kvm guest
	I0317 13:06:26.135946  506896 out.go:177] * [functional-207072] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0317 13:06:26.137666  506896 notify.go:220] Checking for updates...
	I0317 13:06:26.137709  506896 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:06:26.138841  506896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:06:26.140108  506896 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 13:06:26.141489  506896 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 13:06:26.142858  506896 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:06:26.144210  506896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:06:26.145968  506896 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:06:26.146453  506896 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:06:26.170996  506896 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:06:26.171141  506896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:06:26.225648  506896 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-03-17 13:06:26.21639123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 13:06:26.225775  506896 docker.go:318] overlay module found
	I0317 13:06:26.227659  506896 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0317 13:06:26.228986  506896 start.go:297] selected driver: docker
	I0317 13:06:26.229007  506896 start.go:901] validating driver "docker" against &{Name:functional-207072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-207072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0317 13:06:26.229115  506896 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:06:26.231218  506896 out.go:201] 
	W0317 13:06:26.232542  506896 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0317 13:06:26.234154  506896 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
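
This test reruns the same failing dry-run under a French locale and only asserts that the output is localized: "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the French rendering of the English "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" message from the previous test. As a toy illustration of locale-keyed message lookup, with an invented table rather than minikube's actual translation files:

package main

import (
	"fmt"
	"os"
	"strings"
)

// messages maps a message ID to per-locale text; the table is
// made up for this sketch.
var messages = map[string]map[string]string{
	"using_driver": {
		"en": "Using the docker driver based on existing profile",
		"fr": "Utilisation du pilote docker basé sur le profil existant",
	},
}

// locale derives a two-letter language code from LC_ALL/LANG,
// defaulting to English.
func locale() string {
	for _, v := range []string{os.Getenv("LC_ALL"), os.Getenv("LANG")} {
		if len(v) >= 2 {
			return strings.ToLower(v[:2])
		}
	}
	return "en"
}

func main() {
	msg := messages["using_driver"][locale()]
	if msg == "" {
		msg = messages["using_driver"]["en"] // fall back to English
	}
	fmt.Println("*", msg)
}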

TestFunctional/parallel/StatusCmd (0.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

TestFunctional/parallel/ServiceCmdConnect (11.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-207072 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-207072 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-l2sm7" [4d5bd292-8ff3-43c2-a817-9d93a9b9ad08] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-l2sm7" [4d5bd292-8ff3-43c2-a817-9d93a9b9ad08] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.00395865s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:31458
functional_test.go:1692: http://192.168.49.2:31458: success! body:

Hostname: hello-node-connect-58f9cf68d8-l2sm7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31458
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.53s)
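
The body above is echoserver reflecting the request back. Once `minikube service hello-node-connect --url` prints the NodePort endpoint, verifying it takes nothing more than an HTTP GET; a minimal client sketch (the URL is this run's endpoint and changes whenever the service is recreated):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint printed by `minikube service hello-node-connect --url`
	// in the log above; it differs on every cluster start.
	url := "http://192.168.49.2:31458"

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// echoserver reflects the request, including Hostname and
	// Request Headers, as captured in the test output above.
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}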

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (1.94s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh -n functional-207072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cp functional-207072:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2170388430/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh -n functional-207072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh -n functional-207072 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.94s)

TestFunctional/parallel/MySQL (365.78s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-207072 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-qdpnr" [7cdedd78-be76-4e31-8576-904510762777] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-qdpnr" [7cdedd78-be76-4e31-8576-904510762777] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 5m57.004440605s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;": exit status 1 (128.755773ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0317 13:12:39.461709  453732 retry.go:31] will retry after 784.187035ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;": exit status 1 (114.052523ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0317 13:12:40.360911  453732 retry.go:31] will retry after 942.311714ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;": exit status 1 (108.530727ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0317 13:12:41.412736  453732 retry.go:31] will retry after 1.470248681s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;": exit status 1 (170.465974ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0317 13:12:43.055112  453732 retry.go:31] will retry after 4.750109572s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-207072 exec mysql-58ccfd96bb-qdpnr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (365.78s)
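
Most of this test's 365 seconds is MySQL initializing: the pod reports Running well before mysqld accepts connections, so the test keeps issuing `show databases;` and backs off between attempts (the `retry.go:31` lines, with roughly doubling delays). A sketch of that poll-with-backoff shape; the initial delay and cap are assumptions, not values from minikube's retry package:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retryQuery re-runs a command until it succeeds or the deadline
// passes, roughly doubling the wait like the "will retry after ..."
// lines above.
func retryQuery(deadline time.Duration, args ...string) error {
	var err error
	wait := 800 * time.Millisecond
	for start := time.Now(); time.Since(start) < deadline; {
		if err = exec.Command(args[0], args[1:]...).Run(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		if wait *= 2; wait > 10*time.Second {
			wait = 10 * time.Second
		}
	}
	return err
}

func main() {
	err := retryQuery(2*time.Minute, "kubectl", "--context", "functional-207072",
		"exec", "mysql-58ccfd96bb-qdpnr", "--", "mysql", "-ppassword", "-e", "show databases;")
	fmt.Println("final:", err)
}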

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/453732/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo cat /etc/test/nested/copy/453732/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.94s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/453732.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo cat /etc/ssl/certs/453732.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/453732.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo cat /usr/share/ca-certificates/453732.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/4537322.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo cat /etc/ssl/certs/4537322.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/4537322.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo cat /usr/share/ca-certificates/4537322.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.94s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-207072 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
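
The label check leans on kubectl's go-template output: `{{range $k, $v := ...}}` walks the node's label map and emits only the keys. The same template syntax runs locally with Go's text/template; a sketch with a stand-in label map:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for .metadata.labels on the node object; the real
	// test pulls this from `kubectl get nodes` instead.
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-207072",
		"kubernetes.io/os":       "linux",
	}

	// Same shape as the template in the test: range a map, print keys.
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := .}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}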

TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 ssh "sudo systemctl is-active docker": exit status 1 (279.774565ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 ssh "sudo systemctl is-active crio": exit status 1 (283.494272ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
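
On a containerd cluster the docker and crio services must be stopped, and that is what the "failures" above encode: `systemctl is-active` prints "inactive" and exits non-zero (status 3 in the captured stderr), which `minikube ssh` surfaces as exit status 1. A sketch that treats a clean "inactive" as the expected case rather than an error:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState runs `systemctl is-active <unit>`; a non-zero exit
// with "inactive" on stdout is the expected outcome for a disabled
// runtime, per the log above.
func runtimeState(unit string) (string, error) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && state == "inactive" {
		return state, nil // stopped unit: fine, not a failure
	}
	return state, err
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		state, err := runtimeState(unit)
		fmt.Printf("%s: %s (err=%v)\n", unit, state, err)
	}
}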

TestFunctional/parallel/License (0.67s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.67s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-207072 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-207072 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-207072 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 503513: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-207072 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.50s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-207072 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-207072 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-207072 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-484zz" [5f9f4abf-1053-4970-afba-e8fe7df49a2e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-484zz" [5f9f4abf-1053-4970-afba-e8fe7df49a2e] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004162796s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 service list -o json
functional_test.go:1511: Took "498.131633ms" to run "out/minikube-linux-amd64 -p functional-207072 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:30577
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "357.156775ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "60.390105ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:30577
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "372.94355ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "53.537392ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (8.63s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdany-port1113018573/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1742216784713494794" to /tmp/TestFunctionalparallelMountCmdany-port1113018573/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1742216784713494794" to /tmp/TestFunctionalparallelMountCmdany-port1113018573/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1742216784713494794" to /tmp/TestFunctionalparallelMountCmdany-port1113018573/001/test-1742216784713494794
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.950575ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0317 13:06:25.007795  453732 retry.go:31] will retry after 326.398617ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 17 13:06 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 17 13:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 17 13:06 test-1742216784713494794
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh cat /mount-9p/test-1742216784713494794
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-207072 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8d7ee3f1-be25-48d3-9d98-415fab83dfd1] Pending
helpers_test.go:344: "busybox-mount" [8d7ee3f1-be25-48d3-9d98-415fab83dfd1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8d7ee3f1-be25-48d3-9d98-415fab83dfd1] Running
helpers_test.go:344: "busybox-mount" [8d7ee3f1-be25-48d3-9d98-415fab83dfd1] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8d7ee3f1-be25-48d3-9d98-415fab83dfd1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003586093s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-207072 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdany-port1113018573/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.63s)
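
The first `findmnt -T /mount-9p` probe races the background `minikube mount` daemon, fails once, and succeeds on the retry. A sketch of that wait-for-mount loop, mirroring the test's `findmnt ... | grep 9p` check; the attempt count and delay are arbitrary:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForMount polls findmnt until target shows up as a 9p mount,
// tolerating the startup race visible in the log above.
func waitForMount(target string, attempts int) error {
	for i := 0; i < attempts; i++ {
		// findmnt exits non-zero while the mountpoint is absent.
		out, err := exec.Command("findmnt", "-T", target).Output()
		if err == nil && strings.Contains(string(out), "9p") {
			return nil
		}
		time.Sleep(300 * time.Millisecond)
	}
	return fmt.Errorf("%s not mounted as 9p after %d attempts", target, attempts)
}

func main() {
	fmt.Println(waitForMount("/mount-9p", 10))
}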

TestFunctional/parallel/MountCmd/specific-port (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdspecific-port2515541390/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.19087ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0317 13:06:33.632603  453732 retry.go:31] will retry after 437.122006ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdspecific-port2515541390/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 ssh "sudo umount -f /mount-9p": exit status 1 (297.514876ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-207072 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdspecific-port2515541390/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3045002557/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3045002557/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3045002557/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T" /mount1: exit status 1 (375.875776ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0317 13:06:35.569074  453732 retry.go:31] will retry after 449.308629ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-207072 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3045002557/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3045002557/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-207072 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3045002557/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
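All three UpdateContextCmd cases invoke the same command and differ only in the kubeconfig state they start from; update-context points the profile's kubeconfig entry back at the running cluster. A standalone equivalent (binary path and profile taken from this log):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Rewrite the kubeconfig entry for the functional-207072 profile.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-207072",
		"update-context", "--alsologtostderr", "-v=2")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("update-context failed: %v", err)
	}
}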
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)
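For consumers of version -o=json --components, the schema is not reproduced in this log, so the safest sketch decodes into a generic map rather than assuming field names (and assumes the command writes a single JSON object to stdout):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-207072",
		"version", "-o=json", "--components").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode without assuming field names, since the exact schema isn't shown above.
	var v map[string]interface{}
	if err := json.Unmarshal(out, &v); err != nil {
		log.Fatal(err)
	}
	for k, val := range v {
		fmt.Printf("%s: %v\n", k, val)
	}
}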
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207072 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-207072
docker.io/kindest/kindnetd:v20250214-acbabc1a
docker.io/kindest/kindnetd:v20241212-9f82dd49
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207072 image ls --format short --alsologtostderr:
I0317 13:06:43.297240  510539 out.go:345] Setting OutFile to fd 1 ...
I0317 13:06:43.297745  510539 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:43.297756  510539 out.go:358] Setting ErrFile to fd 2...
I0317 13:06:43.297761  510539 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:43.297996  510539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
I0317 13:06:43.298617  510539 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:43.298713  510539 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:43.299224  510539 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
I0317 13:06:43.318290  510539 ssh_runner.go:195] Run: systemctl --version
I0317 13:06:43.318358  510539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
I0317 13:06:43.338583  510539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
I0317 13:06:43.433551  510539 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
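The stderr trace above shows the plumbing every image command repeats: inspect the node container, resolve the host port mapped to 22/tcp, SSH in, and run sudo crictl images --output json. The port-resolution step in isolation, using the exact inspect template from the trace:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker which host port forwards to the node container's SSH port (22/tcp).
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"functional-207072").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 33160 in this run
}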
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207072 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/kube-proxy                  | v1.32.2            | sha256:f13328 | 30.9MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20250214-acbabc1a | sha256:df3849 | 39MB   |
| docker.io/library/minikube-local-cache-test | functional-207072  | sha256:9329c3 | 990B   |
| registry.k8s.io/kube-apiserver              | v1.32.2            | sha256:85b7a1 | 28.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-scheduler              | v1.32.2            | sha256:d8e673 | 20.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| localhost/my-image                          | functional-207072  | sha256:fdb174 | 775kB  |
| registry.k8s.io/kube-controller-manager     | v1.32.2            | sha256:b6a454 | 26.3MB |
| docker.io/kindest/kindnetd                  | v20241212-9f82dd49 | sha256:d30084 | 39MB   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207072 image ls --format table --alsologtostderr:
I0317 13:06:48.337230  511070 out.go:345] Setting OutFile to fd 1 ...
I0317 13:06:48.337488  511070 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:48.337497  511070 out.go:358] Setting ErrFile to fd 2...
I0317 13:06:48.337500  511070 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:48.337676  511070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
I0317 13:06:48.338295  511070 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:48.338392  511070 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:48.338729  511070 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
I0317 13:06:48.358630  511070 ssh_runner.go:195] Run: systemctl --version
I0317 13:06:48.358697  511070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
I0317 13:06:48.377283  511070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
I0317 13:06:48.469747  511070 ssh_runner.go:195] Run: sudo crictl images --output json
E0317 13:07:14.568288  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:08:37.640983  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207072 image ls --format json --alsologtostderr:
[{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:9329c3fc892cad127fcd2288c1eae9825fd174836b79d0b47d880850bd6482ec","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-207072"],"size":"990"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"57680541"},{"id":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":["registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"30907858"},{"id":"sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"39008320"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:fdb174425d85a8ac81c7c9c768d694533af86bf4aee6312f9ca9fcaa90eb9301","repoDigests":[],"repoTags":["localhost/my-image:functional-207072"],"size":"774887"},{"id":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"20657902"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"26259392"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f","repoDigests":["docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495"],"repoTags":["docker.io/kindest/kindnetd:v20250214-acbabc1a"],"size":"38996835"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":["registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"28670731"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207072 image ls --format json --alsologtostderr:
I0317 13:06:48.110612  511021 out.go:345] Setting OutFile to fd 1 ...
I0317 13:06:48.110949  511021 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:48.110961  511021 out.go:358] Setting ErrFile to fd 2...
I0317 13:06:48.110971  511021 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:48.111166  511021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
I0317 13:06:48.112511  511021 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:48.112788  511021 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:48.113630  511021 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
I0317 13:06:48.132960  511021 ssh_runner.go:195] Run: systemctl --version
I0317 13:06:48.133021  511021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
I0317 13:06:48.152676  511021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
I0317 13:06:48.245163  511021 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
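The array above carries four fields per image record; a minimal decoder for exactly that shape (note that size is a JSON string, matching the quoted sizes in the YAML listing below):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image matches the record shape shown in the JSON listing above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-207072",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s\n", tag, img.Size)
		}
	}
}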
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-207072 image ls --format yaml --alsologtostderr:
- id: sha256:df3849d954c98a7162c7bee7313ece357606e313d98ebd68b7aac5e961b1156f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495
repoTags:
- docker.io/kindest/kindnetd:v20250214-acbabc1a
size: "38996835"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:9329c3fc892cad127fcd2288c1eae9825fd174836b79d0b47d880850bd6482ec
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-207072
size: "990"
- id: sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "28670731"
- id: sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "20657902"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "26259392"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "39008320"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"
- id: sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "30907858"
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207072 image ls --format yaml --alsologtostderr:
I0317 13:06:43.528052  510603 out.go:345] Setting OutFile to fd 1 ...
I0317 13:06:43.528309  510603 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:43.528355  510603 out.go:358] Setting ErrFile to fd 2...
I0317 13:06:43.528361  510603 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:43.528569  510603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
I0317 13:06:43.529170  510603 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:43.529299  510603 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:43.529705  510603 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
I0317 13:06:43.548787  510603 ssh_runner.go:195] Run: systemctl --version
I0317 13:06:43.548848  510603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
I0317 13:06:43.568779  510603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
I0317 13:06:43.665848  510603 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-207072 ssh pgrep buildkitd: exit status 1 (270.596658ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image build -t localhost/my-image:functional-207072 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-207072 image build -t localhost/my-image:functional-207072 testdata/build --alsologtostderr: (3.850302143s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-207072 image build -t localhost/my-image:functional-207072 testdata/build --alsologtostderr:
I0317 13:06:44.031437  510747 out.go:345] Setting OutFile to fd 1 ...
I0317 13:06:44.032123  510747 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:44.032162  510747 out.go:358] Setting ErrFile to fd 2...
I0317 13:06:44.032229  510747 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 13:06:44.032704  510747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
I0317 13:06:44.033842  510747 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:44.034421  510747 config.go:182] Loaded profile config "functional-207072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0317 13:06:44.034772  510747 cli_runner.go:164] Run: docker container inspect functional-207072 --format={{.State.Status}}
I0317 13:06:44.054267  510747 ssh_runner.go:195] Run: systemctl --version
I0317 13:06:44.054336  510747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-207072
I0317 13:06:44.075508  510747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/functional-207072/id_rsa Username:docker}
I0317 13:06:44.173098  510747 build_images.go:161] Building image from path: /tmp/build.1817854865.tar
I0317 13:06:44.173191  510747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0317 13:06:44.182689  510747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1817854865.tar
I0317 13:06:44.186317  510747 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1817854865.tar: stat -c "%s %y" /var/lib/minikube/build/build.1817854865.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1817854865.tar': No such file or directory
I0317 13:06:44.186360  510747 ssh_runner.go:362] scp /tmp/build.1817854865.tar --> /var/lib/minikube/build/build.1817854865.tar (3072 bytes)
I0317 13:06:44.211234  510747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1817854865
I0317 13:06:44.220493  510747 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1817854865 -xf /var/lib/minikube/build/build.1817854865.tar
I0317 13:06:44.230763  510747 containerd.go:394] Building image: /var/lib/minikube/build/build.1817854865
I0317 13:06:44.230849  510747 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1817854865 --local dockerfile=/var/lib/minikube/build/build.1817854865 --output type=image,name=localhost/my-image:functional-207072
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3fa3581b56ddb233f0d8ae8b18b4593214ff90b4cd262eb60e0904e6b7fddea8 done
#8 exporting config sha256:fdb174425d85a8ac81c7c9c768d694533af86bf4aee6312f9ca9fcaa90eb9301
#8 exporting config sha256:fdb174425d85a8ac81c7c9c768d694533af86bf4aee6312f9ca9fcaa90eb9301 done
#8 naming to localhost/my-image:functional-207072 done
#8 DONE 0.1s
I0317 13:06:47.806032  510747 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1817854865 --local dockerfile=/var/lib/minikube/build/build.1817854865 --output type=image,name=localhost/my-image:functional-207072: (3.57514767s)
I0317 13:06:47.806115  510747 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1817854865
I0317 13:06:47.816276  510747 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1817854865.tar
I0317 13:06:47.825214  510747 build_images.go:217] Built localhost/my-image:functional-207072 from /tmp/build.1817854865.tar
I0317 13:06:47.825256  510747 build_images.go:133] succeeded building to: functional-207072
I0317 13:06:47.825262  510747 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.35s)
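Build steps #1 through #7 pin down the shape of the Dockerfile: a 97-byte file with a gcr.io/k8s-minikube/busybox base, a RUN true layer, and an ADD of content.txt. A sketch reproducing the same build outside the harness; the Dockerfile body is inferred from those steps, not copied from testdata/build:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// Inferred from build steps #5-#7 above; may not match testdata/build verbatim.
	dockerfile := `FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
`
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("content\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-207072",
		"image", "build", "-t", "localhost/my-image:functional-207072", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}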
TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image rm kicbase/echo-server:functional-207072 --alsologtostderr
2025/03/17 13:06:41 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-207072 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)
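The rm-then-ls pair above is a remove-and-verify pattern; condensed into one program (image name and profile from this log):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	mk := "out/minikube-linux-amd64"
	profile := "functional-207072"
	target := "kicbase/echo-server:" + profile

	// Remove the tag, then list images and confirm it is gone.
	if err := exec.Command(mk, "-p", profile, "image", "rm", target).Run(); err != nil {
		log.Fatal(err)
	}
	out, err := exec.Command(mk, "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(out), target) {
		log.Fatalf("%s still present after image rm", target)
	}
	fmt.Println(target, "removed")
}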
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-207072 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0317 13:12:14.568280  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-207072
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-207072
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-207072
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (96.74s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-840729 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-840729 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m35.992782833s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (96.74s)

TestMultiControlPlane/serial/DeployApp (5.45s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-840729 -- rollout status deployment/busybox: (3.391339871s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-5smsm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-cvd8b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-dqjdt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-5smsm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-cvd8b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-dqjdt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-5smsm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-cvd8b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-dqjdt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.45s)
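The nine exec calls above are three DNS lookups fanned out over three busybox pods. Condensed, calling kubectl directly against the ha-840729 context instead of through minikube kubectl (pod names are the ones from this run; normally they would come from kubectl get pods):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	pods := []string{"busybox-58667487b6-5smsm", "busybox-58667487b6-cvd8b", "busybox-58667487b6-dqjdt"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Run the same nslookup the test performs inside each pod.
			cmd := exec.Command("kubectl", "--context", "ha-840729",
				"exec", pod, "--", "nslookup", name)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				log.Fatalf("nslookup %s in %s failed: %v", name, pod, err)
			}
		}
	}
}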
TestMultiControlPlane/serial/PingHostFromPods (1.14s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-5smsm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-5smsm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-cvd8b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-cvd8b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-dqjdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-840729 -- exec busybox-58667487b6-dqjdt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)
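The pipeline inside each exec, nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, selects the third space-delimited field of the fifth output line; on the busybox image used here that is the resolved host IP, which the following ping -c 1 then targets. The same extraction in Go, reading nslookup output from stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	line := 0
	for scanner.Scan() {
		line++
		if line == 5 { // awk 'NR==5'
			// cut -d' ' -f3: split on single spaces, empty fields count.
			fields := strings.Split(scanner.Text(), " ")
			if len(fields) >= 3 {
				fmt.Println(fields[2])
			}
			return
		}
	}
}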
TestMultiControlPlane/serial/AddWorkerNode (22.24s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-840729 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-840729 -v=7 --alsologtostderr: (21.338362461s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.24s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-840729 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (17.77s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp testdata/cp-test.txt ha-840729:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2161631985/001/cp-test_ha-840729.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729:/home/docker/cp-test.txt ha-840729-m02:/home/docker/cp-test_ha-840729_ha-840729-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m02 "sudo cat /home/docker/cp-test_ha-840729_ha-840729-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729:/home/docker/cp-test.txt ha-840729-m03:/home/docker/cp-test_ha-840729_ha-840729-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m03 "sudo cat /home/docker/cp-test_ha-840729_ha-840729-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729:/home/docker/cp-test.txt ha-840729-m04:/home/docker/cp-test_ha-840729_ha-840729-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m04 "sudo cat /home/docker/cp-test_ha-840729_ha-840729-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp testdata/cp-test.txt ha-840729-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2161631985/001/cp-test_ha-840729-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m02:/home/docker/cp-test.txt ha-840729:/home/docker/cp-test_ha-840729-m02_ha-840729.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729 "sudo cat /home/docker/cp-test_ha-840729-m02_ha-840729.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m02:/home/docker/cp-test.txt ha-840729-m03:/home/docker/cp-test_ha-840729-m02_ha-840729-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m03 "sudo cat /home/docker/cp-test_ha-840729-m02_ha-840729-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m02:/home/docker/cp-test.txt ha-840729-m04:/home/docker/cp-test_ha-840729-m02_ha-840729-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m04 "sudo cat /home/docker/cp-test_ha-840729-m02_ha-840729-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp testdata/cp-test.txt ha-840729-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2161631985/001/cp-test_ha-840729-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m03:/home/docker/cp-test.txt ha-840729:/home/docker/cp-test_ha-840729-m03_ha-840729.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729 "sudo cat /home/docker/cp-test_ha-840729-m03_ha-840729.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m03:/home/docker/cp-test.txt ha-840729-m02:/home/docker/cp-test_ha-840729-m03_ha-840729-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m02 "sudo cat /home/docker/cp-test_ha-840729-m03_ha-840729-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m03:/home/docker/cp-test.txt ha-840729-m04:/home/docker/cp-test_ha-840729-m03_ha-840729-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m04 "sudo cat /home/docker/cp-test_ha-840729-m03_ha-840729-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp testdata/cp-test.txt ha-840729-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2161631985/001/cp-test_ha-840729-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m04:/home/docker/cp-test.txt ha-840729:/home/docker/cp-test_ha-840729-m04_ha-840729.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729 "sudo cat /home/docker/cp-test_ha-840729-m04_ha-840729.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m04:/home/docker/cp-test.txt ha-840729-m02:/home/docker/cp-test_ha-840729-m04_ha-840729-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m02 "sudo cat /home/docker/cp-test_ha-840729-m04_ha-840729-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 cp ha-840729-m04:/home/docker/cp-test.txt ha-840729-m03:/home/docker/cp-test_ha-840729-m04_ha-840729-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 ssh -n ha-840729-m03 "sudo cat /home/docker/cp-test_ha-840729-m04_ha-840729-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.77s)
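CopyFile pushes the same testdata/cp-test.txt to every node and reads it back with ssh -n <node> sudo cat, plus the node-to-node permutations above. The core push-and-verify loop, condensed (node list from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	profile := "ha-840729"
	nodes := []string{"ha-840729", "ha-840729-m02", "ha-840729-m03", "ha-840729-m04"}

	for _, node := range nodes {
		// Copy the file in, then read it back over ssh to verify the contents arrived.
		dst := node + ":/home/docker/cp-test.txt"
		if err := exec.Command(mk, "-p", profile, "cp", "testdata/cp-test.txt", dst).Run(); err != nil {
			log.Fatalf("cp to %s: %v", node, err)
		}
		out, err := exec.Command(mk, "-p", profile, "ssh", "-n", node,
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatalf("read back from %s: %v", node, err)
		}
		fmt.Printf("%s: %s", node, out)
	}
}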
TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-840729 node stop m02 -v=7 --alsologtostderr: (11.977635362s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr: exit status 7 (731.758241ms)

-- stdout --
	ha-840729
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-840729-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-840729-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-840729-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0317 13:15:28.621092  535048 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:15:28.621372  535048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:15:28.621382  535048 out.go:358] Setting ErrFile to fd 2...
	I0317 13:15:28.621386  535048 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:15:28.621603  535048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:15:28.621805  535048 out.go:352] Setting JSON to false
	I0317 13:15:28.621854  535048 mustload.go:65] Loading cluster: ha-840729
	I0317 13:15:28.622032  535048 notify.go:220] Checking for updates...
	I0317 13:15:28.622336  535048 config.go:182] Loaded profile config "ha-840729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:15:28.622363  535048 status.go:174] checking status of ha-840729 ...
	I0317 13:15:28.622873  535048 cli_runner.go:164] Run: docker container inspect ha-840729 --format={{.State.Status}}
	I0317 13:15:28.642735  535048 status.go:371] ha-840729 host status = "Running" (err=<nil>)
	I0317 13:15:28.642774  535048 host.go:66] Checking if "ha-840729" exists ...
	I0317 13:15:28.643068  535048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-840729
	I0317 13:15:28.662972  535048 host.go:66] Checking if "ha-840729" exists ...
	I0317 13:15:28.663294  535048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:15:28.663345  535048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-840729
	I0317 13:15:28.683933  535048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/ha-840729/id_rsa Username:docker}
	I0317 13:15:28.778074  535048 ssh_runner.go:195] Run: systemctl --version
	I0317 13:15:28.782883  535048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:15:28.795903  535048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:15:28.854861  535048 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:73 SystemTime:2025-03-17 13:15:28.844677801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 13:15:28.855420  535048 kubeconfig.go:125] found "ha-840729" server: "https://192.168.49.254:8443"
	I0317 13:15:28.855454  535048 api_server.go:166] Checking apiserver status ...
	I0317 13:15:28.855492  535048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:15:28.868022  535048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1586/cgroup
	I0317 13:15:28.877902  535048 api_server.go:182] apiserver freezer: "5:freezer:/docker/0c0a55dd26807be69ddb17a640152283afcea7093b68f57d35a323cfbf33d1fd/kubepods/burstable/podeec4ae47df526e8eb6ff3dbc14b0bd20/1c89b6ee256bf3db7c527540d8dd2620fc52df99e4293d0ea806131dc86bd3d3"
	I0317 13:15:28.877986  535048 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0c0a55dd26807be69ddb17a640152283afcea7093b68f57d35a323cfbf33d1fd/kubepods/burstable/podeec4ae47df526e8eb6ff3dbc14b0bd20/1c89b6ee256bf3db7c527540d8dd2620fc52df99e4293d0ea806131dc86bd3d3/freezer.state
	I0317 13:15:28.888049  535048 api_server.go:204] freezer state: "THAWED"
	I0317 13:15:28.888089  535048 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0317 13:15:28.892217  535048 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0317 13:15:28.892252  535048 status.go:463] ha-840729 apiserver status = Running (err=<nil>)
	I0317 13:15:28.892265  535048 status.go:176] ha-840729 status: &{Name:ha-840729 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:15:28.892283  535048 status.go:174] checking status of ha-840729-m02 ...
	I0317 13:15:28.892621  535048 cli_runner.go:164] Run: docker container inspect ha-840729-m02 --format={{.State.Status}}
	I0317 13:15:28.911383  535048 status.go:371] ha-840729-m02 host status = "Stopped" (err=<nil>)
	I0317 13:15:28.911410  535048 status.go:384] host is not running, skipping remaining checks
	I0317 13:15:28.911418  535048 status.go:176] ha-840729-m02 status: &{Name:ha-840729-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:15:28.911461  535048 status.go:174] checking status of ha-840729-m03 ...
	I0317 13:15:28.911739  535048 cli_runner.go:164] Run: docker container inspect ha-840729-m03 --format={{.State.Status}}
	I0317 13:15:28.932092  535048 status.go:371] ha-840729-m03 host status = "Running" (err=<nil>)
	I0317 13:15:28.932123  535048 host.go:66] Checking if "ha-840729-m03" exists ...
	I0317 13:15:28.932468  535048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-840729-m03
	I0317 13:15:28.951881  535048 host.go:66] Checking if "ha-840729-m03" exists ...
	I0317 13:15:28.952155  535048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:15:28.952201  535048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-840729-m03
	I0317 13:15:28.974162  535048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/ha-840729-m03/id_rsa Username:docker}
	I0317 13:15:29.069976  535048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:15:29.083480  535048 kubeconfig.go:125] found "ha-840729" server: "https://192.168.49.254:8443"
	I0317 13:15:29.083513  535048 api_server.go:166] Checking apiserver status ...
	I0317 13:15:29.083548  535048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:15:29.094974  535048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	I0317 13:15:29.105251  535048 api_server.go:182] apiserver freezer: "5:freezer:/docker/38804c089656a50ba54d378d3cdbe45dfd2c287a585246e39c1b850b8ada13bd/kubepods/burstable/pod9281474fc2904a04812321cca0883511/40d837e9e110e06702ac90e5690cc244b1b5f0f9ddc8ceab71199ff8c0096c8d"
	I0317 13:15:29.105331  535048 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/38804c089656a50ba54d378d3cdbe45dfd2c287a585246e39c1b850b8ada13bd/kubepods/burstable/pod9281474fc2904a04812321cca0883511/40d837e9e110e06702ac90e5690cc244b1b5f0f9ddc8ceab71199ff8c0096c8d/freezer.state
	I0317 13:15:29.114792  535048 api_server.go:204] freezer state: "THAWED"
	I0317 13:15:29.114846  535048 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0317 13:15:29.118908  535048 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0317 13:15:29.118941  535048 status.go:463] ha-840729-m03 apiserver status = Running (err=<nil>)
	I0317 13:15:29.118953  535048 status.go:176] ha-840729-m03 status: &{Name:ha-840729-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:15:29.118984  535048 status.go:174] checking status of ha-840729-m04 ...
	I0317 13:15:29.119250  535048 cli_runner.go:164] Run: docker container inspect ha-840729-m04 --format={{.State.Status}}
	I0317 13:15:29.139835  535048 status.go:371] ha-840729-m04 host status = "Running" (err=<nil>)
	I0317 13:15:29.139886  535048 host.go:66] Checking if "ha-840729-m04" exists ...
	I0317 13:15:29.140154  535048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-840729-m04
	I0317 13:15:29.159902  535048 host.go:66] Checking if "ha-840729-m04" exists ...
	I0317 13:15:29.160287  535048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:15:29.160367  535048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-840729-m04
	I0317 13:15:29.181832  535048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/ha-840729-m04/id_rsa Username:docker}
	I0317 13:15:29.281782  535048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:15:29.293677  535048 status.go:176] ha-840729-m04 status: &{Name:ha-840729-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
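
Note on the probe sequence in the stderr block above: minikube decides "APIServer:Running" by finding the kube-apiserver process with pgrep, reading its freezer cgroup to confirm the container is THAWED, and finally polling the /healthz endpoint until it returns 200/ok. A minimal Go sketch of that last step, using the endpoint from this run; skipping TLS verification is an assumption of the sketch only, since it has no access to the profile's CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz mirrors the "Checking apiserver healthz" step in the log.
    // InsecureSkipVerify is this sketch's shortcut; minikube itself validates
    // the connection against the cluster CA.
    func checkHealthz(endpoint string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil // body is "ok" on a healthy apiserver, as in the log
    }

    func main() {
    	if err := checkHealthz("https://192.168.49.254:8443"); err != nil {
    		fmt.Println("apiserver not healthy:", err)
    		return
    	}
    	fmt.Println("apiserver status = Running")
    }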

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (16.25s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-840729 node start m02 -v=7 --alsologtostderr: (15.275831268s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (16.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (109.51s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-840729 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-840729 -v=7 --alsologtostderr
E0317 13:16:11.435546  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:11.442102  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:11.453674  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:11.475284  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:11.516935  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:11.598541  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:11.760263  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:12.082223  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:12.724474  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-840729 -v=7 --alsologtostderr: (26.112577967s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-840729 --wait=true -v=7 --alsologtostderr
E0317 13:16:14.006250  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:16.569442  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:21.691007  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:31.932515  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:16:52.413893  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:17:14.568295  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:17:33.375541  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-840729 --wait=true -v=7 --alsologtostderr: (1m23.284996937s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-840729
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (109.51s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.43s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-840729 node delete m03 -v=7 --alsologtostderr: (8.599190205s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.43s)
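
The Ready check at ha_test.go:521 above walks each node's status.conditions with a kubectl go-template and prints the Ready condition's status ("True" for healthy nodes). The same walk expressed with client-go, as a hedged sketch (the test itself shells out to kubectl; using client-go and the default kubeconfig path are this sketch's assumptions):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the kubeconfig the test context points at.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Same iteration as the go-template: for each node, for each
    	// condition, print the status of the "Ready" condition.
    	for _, node := range nodes.Items {
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == "Ready" {
    				fmt.Println(node.Name, cond.Status)
    			}
    		}
    	}
    }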

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (36.3s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-840729 stop -v=7 --alsologtostderr: (36.18004451s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr: exit status 7 (115.584759ms)
-- stdout --
	ha-840729
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-840729-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-840729-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0317 13:18:23.146510  551913 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:18:23.146813  551913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:18:23.146825  551913 out.go:358] Setting ErrFile to fd 2...
	I0317 13:18:23.146829  551913 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:18:23.147027  551913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:18:23.147207  551913 out.go:352] Setting JSON to false
	I0317 13:18:23.147242  551913 mustload.go:65] Loading cluster: ha-840729
	I0317 13:18:23.147400  551913 notify.go:220] Checking for updates...
	I0317 13:18:23.147702  551913 config.go:182] Loaded profile config "ha-840729": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:18:23.147724  551913 status.go:174] checking status of ha-840729 ...
	I0317 13:18:23.148204  551913 cli_runner.go:164] Run: docker container inspect ha-840729 --format={{.State.Status}}
	I0317 13:18:23.170455  551913 status.go:371] ha-840729 host status = "Stopped" (err=<nil>)
	I0317 13:18:23.170493  551913 status.go:384] host is not running, skipping remaining checks
	I0317 13:18:23.170503  551913 status.go:176] ha-840729 status: &{Name:ha-840729 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:18:23.170540  551913 status.go:174] checking status of ha-840729-m02 ...
	I0317 13:18:23.170826  551913 cli_runner.go:164] Run: docker container inspect ha-840729-m02 --format={{.State.Status}}
	I0317 13:18:23.189270  551913 status.go:371] ha-840729-m02 host status = "Stopped" (err=<nil>)
	I0317 13:18:23.189316  551913 status.go:384] host is not running, skipping remaining checks
	I0317 13:18:23.189331  551913 status.go:176] ha-840729-m02 status: &{Name:ha-840729-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:18:23.189378  551913 status.go:174] checking status of ha-840729-m04 ...
	I0317 13:18:23.189689  551913 cli_runner.go:164] Run: docker container inspect ha-840729-m04 --format={{.State.Status}}
	I0317 13:18:23.208516  551913 status.go:371] ha-840729-m04 host status = "Stopped" (err=<nil>)
	I0317 13:18:23.208539  551913 status.go:384] host is not running, skipping remaining checks
	I0317 13:18:23.208546  551913 status.go:176] ha-840729-m04 status: &{Name:ha-840729-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.30s)
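
"minikube status" signals degraded or stopped clusters through its exit code, which is why the test tolerates the non-zero exit above (exit status 7 once every node is stopped). A small Go sketch that recovers the code the way a caller would:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same invocation as the test; out/minikube-linux-amd64 is the
    	// binary under test in this report.
    	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-840729", "status")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out)) // the per-node stdout block shown above
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		// A fully stopped cluster yields a non-zero code (7 in this run).
    		fmt.Println("exit code:", exitErr.ExitCode())
    	}
    }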

TestMultiControlPlane/serial/RestartCluster (71.98s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-840729 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0317 13:18:55.297333  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-840729 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m11.110806959s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (71.98s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

TestMultiControlPlane/serial/AddSecondaryNode (37.14s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-840729 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-840729 --control-plane -v=7 --alsologtostderr: (36.228378348s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-840729 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestJSONOutput/start/Command (51.2s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-577473 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0317 13:21:11.435808  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-577473 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.203145659s)
--- PASS: TestJSONOutput/start/Command (51.20s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-577473 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-577473 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-577473 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-577473 --output=json --user=testUser: (5.769197551s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-170649 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-170649 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (75.58845ms)
-- stdout --
	{"specversion":"1.0","id":"5d53c5d7-91b5-4f4b-a5c3-84ef3f720064","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-170649] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d0eed0a7-466d-4d95-aa6b-e7ea441a3bab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20539"}}
	{"specversion":"1.0","id":"0a90d671-67f5-4928-b643-703724350737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b49ad682-8576-4581-bf1a-9a45c91ff852","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig"}}
	{"specversion":"1.0","id":"0952f198-dc5d-4097-8e1e-8ca5d0f97f77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube"}}
	{"specversion":"1.0","id":"4eadfdc2-927f-41c6-9210-47e1a05c359c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1c6ec9f6-5b98-45b1-b7f6-58c2076bf76e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"998cbb2c-ed1b-4156-abe4-e894d7608868","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-170649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-170649
--- PASS: TestErrorJSONOutput (0.23s)
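
Each stdout line above is a CloudEvents envelope; the type field (io.k8s.sigs.minikube.step, .info, .error) tells consumers how to interpret data. A sketch of a consumer for exactly these fields; the struct shape is inferred from the events shown, and only the keys visible above are assumed:

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // event mirrors the CloudEvents envelope emitted by minikube's
    // --output=json mode, as seen in the stdout block above.
    type event struct {
    	SpecVersion string            `json:"specversion"`
    	ID          string            `json:"id"`
    	Source      string            `json:"source"`
    	Type        string            `json:"type"`
    	Data        map[string]string `json:"data"`
    }

    func main() {
    	// Pipe `minikube start --output=json ...` into stdin.
    	scanner := bufio.NewScanner(os.Stdin)
    	for scanner.Scan() {
    		var ev event
    		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
    			continue // skip any non-JSON lines
    		}
    		switch ev.Type {
    		case "io.k8s.sigs.minikube.step":
    			fmt.Printf("step %s/%s: %s\n",
    				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
    		case "io.k8s.sigs.minikube.error":
    			fmt.Printf("error %s (exit %s): %s\n",
    				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
    		}
    	}
    }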

TestKicCustomNetwork/create_custom_network (39.51s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-621177 --network=
E0317 13:21:39.140591  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-621177 --network=: (37.408588491s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-621177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-621177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-621177: (2.076802956s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.51s)

TestKicCustomNetwork/use_default_bridge_network (24.28s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-015490 --network=bridge
E0317 13:22:14.568602  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-015490 --network=bridge: (22.278010703s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-015490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-015490
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-015490: (1.978004341s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.28s)

TestKicExistingNetwork (27.08s)

=== RUN   TestKicExistingNetwork
I0317 13:22:29.021415  453732 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0317 13:22:29.039647  453732 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0317 13:22:29.039726  453732 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0317 13:22:29.039752  453732 cli_runner.go:164] Run: docker network inspect existing-network
W0317 13:22:29.057687  453732 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0317 13:22:29.057722  453732 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0317 13:22:29.057743  453732 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0317 13:22:29.057884  453732 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0317 13:22:29.076736  453732 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-42a82cf6e7f4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:92:6e:bf:a6:cf} reservation:<nil>}
I0317 13:22:29.077178  453732 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001adb160}
I0317 13:22:29.077211  453732 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0317 13:22:29.077259  453732 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0317 13:22:29.133127  453732 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-158369 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-158369 --network=existing-network: (24.914433502s)
helpers_test.go:175: Cleaning up "existing-network-158369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-158369
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-158369: (2.013436322s)
I0317 13:22:56.080835  453732 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.08s)
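
The trace above shows the network setup order: inspect the requested name, scan private /24 subnets for a free one (192.168.49.0/24 is already held by an existing bridge, so 192.168.58.0/24 is chosen), then create the bridge. A sketch reproducing the final create command from the log verbatim via os/exec:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Flags copied from the network_create.go:124 step above.
    	args := []string{
    		"network", "create", "--driver=bridge",
    		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
    		"-o", "--ip-masq", "-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io=existing-network",
    		"existing-network",
    	}
    	out, err := exec.Command("docker", args...).CombinedOutput()
    	if err != nil {
    		fmt.Printf("create failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("network existing-network created on 192.168.58.0/24")
    }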

TestKicCustomSubnet (28.62s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-185286 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-185286 --subnet=192.168.60.0/24: (26.544104503s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-185286 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-185286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-185286
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-185286: (2.048388066s)
--- PASS: TestKicCustomSubnet (28.62s)
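
The subnet assertion above relies on a Go template over the network's IPAM config. A sketch that runs the same inspect and validates the returned CIDR; the profile name and expected subnet are taken from this run:

    package main

    import (
    	"fmt"
    	"net"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same template the test uses to read the first IPAM subnet.
    	out, err := exec.Command("docker", "network", "inspect",
    		"custom-subnet-185286", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	subnet := strings.TrimSpace(string(out))
    	// Parse the CIDR and compare it to the --subnet flag passed at start.
    	if _, cidr, err := net.ParseCIDR(subnet); err == nil && cidr.String() == "192.168.60.0/24" {
    		fmt.Println("subnet matches:", subnet)
    	} else {
    		fmt.Println("unexpected subnet:", subnet)
    	}
    }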

TestKicStaticIP (27.63s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-054643 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-054643 --static-ip=192.168.200.200: (25.315784241s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-054643 ip
helpers_test.go:175: Cleaning up "static-ip-054643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-054643
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-054643: (2.174547837s)
--- PASS: TestKicStaticIP (27.63s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (54.91s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-456600 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-456600 --driver=docker  --container-runtime=containerd: (25.051582515s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-472078 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-472078 --driver=docker  --container-runtime=containerd: (24.37226822s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-456600
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-472078
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-472078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-472078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-472078: (1.954108316s)
helpers_test.go:175: Cleaning up "first-456600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-456600
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-456600: (2.292389741s)
--- PASS: TestMinikubeProfile (54.91s)

TestMountStart/serial/StartWithMountFirst (6.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-689154 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-689154 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.156518683s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.16s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-689154 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (8.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-709871 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-709871 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.728315116s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.73s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-709871 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-689154 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-689154 --alsologtostderr -v=5: (1.682072731s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-709871 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-709871
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-709871: (1.184229972s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.9s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-709871
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-709871: (6.90362717s)
--- PASS: TestMountStart/serial/RestartStopped (7.90s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-709871 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (60.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701285 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0317 13:25:17.643314  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:26:11.429756  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-701285 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (59.819893507s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.31s)

TestMultiNode/serial/DeployApp2Nodes (18.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-701285 -- rollout status deployment/busybox: (17.345663589s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-2x4sn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-q24ch -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-2x4sn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-q24ch -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-2x4sn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-q24ch -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.83s)
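
The deploy test above resolves three names from each busybox replica, one pod per node, so in-cluster DNS is exercised from both kubelets. A sketch driving the same exec loop through the minikube kubectl wrapper; the pod names are the ones from this run, and a fresh run would list them first as the test does:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	pods := []string{"busybox-58667487b6-2x4sn", "busybox-58667487b6-q24ch"}
    	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
    	for _, pod := range pods {
    		for _, name := range names {
    			// Mirrors: out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec POD -- nslookup NAME
    			out, err := exec.Command("out/minikube-linux-amd64", "kubectl",
    				"-p", "multinode-701285", "--", "exec", pod, "--", "nslookup", name).CombinedOutput()
    			if err != nil {
    				fmt.Printf("%s: lookup %s failed: %v\n%s", pod, name, err, out)
    				continue
    			}
    			fmt.Printf("%s resolved %s\n", pod, name)
    		}
    	}
    }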

TestMultiNode/serial/PingHostFrom2Pods (0.78s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-2x4sn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-2x4sn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-q24ch -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-701285 -- exec busybox-58667487b6-q24ch -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)
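
The pipeline "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3" above takes the fifth line of busybox nslookup output and its third space-separated field, which is the host-side gateway address (192.168.67.1 here, then confirmed with ping). A literal Go translation, assuming that busybox output layout:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take the fifth line
    // of the lookup output and its third space-delimited field.
    func hostIP(nslookupOutput string) string {
    	lines := strings.Split(nslookupOutput, "\n")
    	if len(lines) < 5 {
    		return ""
    	}
    	fields := strings.Split(lines[4], " ") // like cut -d' ', keeps empty fields
    	if len(fields) < 3 {
    		return ""
    	}
    	return fields[2]
    }

    func main() {
    	// Assumed shape of busybox nslookup output for host.minikube.internal.
    	sample := "Server:    10.96.0.10\n" +
    		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
    		"\n" +
    		"Name:      host.minikube.internal\n" +
    		"Address 1: 192.168.67.1 host.minikube.internal\n"
    	fmt.Println(hostIP(sample)) // 192.168.67.1
    }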

TestMultiNode/serial/AddNode (18.09s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-701285 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-701285 -v 3 --alsologtostderr: (17.410060689s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.09s)

TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-701285 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (9.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp testdata/cp-test.txt multinode-701285:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1997160159/001/cp-test_multinode-701285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285:/home/docker/cp-test.txt multinode-701285-m02:/home/docker/cp-test_multinode-701285_multinode-701285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m02 "sudo cat /home/docker/cp-test_multinode-701285_multinode-701285-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285:/home/docker/cp-test.txt multinode-701285-m03:/home/docker/cp-test_multinode-701285_multinode-701285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m03 "sudo cat /home/docker/cp-test_multinode-701285_multinode-701285-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp testdata/cp-test.txt multinode-701285-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1997160159/001/cp-test_multinode-701285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285-m02:/home/docker/cp-test.txt multinode-701285:/home/docker/cp-test_multinode-701285-m02_multinode-701285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285 "sudo cat /home/docker/cp-test_multinode-701285-m02_multinode-701285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285-m02:/home/docker/cp-test.txt multinode-701285-m03:/home/docker/cp-test_multinode-701285-m02_multinode-701285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m03 "sudo cat /home/docker/cp-test_multinode-701285-m02_multinode-701285-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp testdata/cp-test.txt multinode-701285-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1997160159/001/cp-test_multinode-701285-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285-m03:/home/docker/cp-test.txt multinode-701285:/home/docker/cp-test_multinode-701285-m03_multinode-701285.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285 "sudo cat /home/docker/cp-test_multinode-701285-m03_multinode-701285.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 cp multinode-701285-m03:/home/docker/cp-test.txt multinode-701285-m02:/home/docker/cp-test_multinode-701285-m03_multinode-701285-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 ssh -n multinode-701285-m02 "sudo cat /home/docker/cp-test_multinode-701285-m03_multinode-701285-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.90s)
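The CopyFile matrix above pairs every cp with an ssh -n ... cat readback across each node combination. A condensed sketch of one round trip, again assuming the hypothetical demo profile:

    # host -> control plane
    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
    # control plane -> worker
    minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt
    # verify on the worker, then copy back to the host
    minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"
    minikube -p demo cp demo-m02:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt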

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-701285 node stop m03: (1.20079224s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-701285 status: exit status 7 (517.617793ms)

                                                
                                                
-- stdout --
	multinode-701285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-701285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-701285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-701285 status --alsologtostderr: exit status 7 (510.661523ms)

                                                
                                                
-- stdout --
	multinode-701285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-701285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-701285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:27:06.341404  616625 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:27:06.341717  616625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:27:06.341729  616625 out.go:358] Setting ErrFile to fd 2...
	I0317 13:27:06.341735  616625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:27:06.341953  616625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:27:06.342140  616625 out.go:352] Setting JSON to false
	I0317 13:27:06.342186  616625 mustload.go:65] Loading cluster: multinode-701285
	I0317 13:27:06.342268  616625 notify.go:220] Checking for updates...
	I0317 13:27:06.342697  616625 config.go:182] Loaded profile config "multinode-701285": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:27:06.342726  616625 status.go:174] checking status of multinode-701285 ...
	I0317 13:27:06.343227  616625 cli_runner.go:164] Run: docker container inspect multinode-701285 --format={{.State.Status}}
	I0317 13:27:06.363528  616625 status.go:371] multinode-701285 host status = "Running" (err=<nil>)
	I0317 13:27:06.363562  616625 host.go:66] Checking if "multinode-701285" exists ...
	I0317 13:27:06.363864  616625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701285
	I0317 13:27:06.382980  616625 host.go:66] Checking if "multinode-701285" exists ...
	I0317 13:27:06.383294  616625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:27:06.383352  616625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701285
	I0317 13:27:06.405055  616625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/multinode-701285/id_rsa Username:docker}
	I0317 13:27:06.498204  616625 ssh_runner.go:195] Run: systemctl --version
	I0317 13:27:06.503070  616625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:27:06.515468  616625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:27:06.568953  616625 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-03-17 13:27:06.558472768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 13:27:06.569584  616625 kubeconfig.go:125] found "multinode-701285" server: "https://192.168.67.2:8443"
	I0317 13:27:06.569623  616625 api_server.go:166] Checking apiserver status ...
	I0317 13:27:06.569659  616625 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0317 13:27:06.581969  616625 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1469/cgroup
	I0317 13:27:06.593114  616625 api_server.go:182] apiserver freezer: "5:freezer:/docker/083c6355acab22894854d432b57696702fd2db0dc732e5010b88e92c2f77d9f4/kubepods/burstable/pod5b13a0a7236a53cd7dce16b0dc30c280/38e69990c558ea8ce07367b31f64be32b6c7c388a807eedba3541c7963adfb75"
	I0317 13:27:06.593214  616625 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/083c6355acab22894854d432b57696702fd2db0dc732e5010b88e92c2f77d9f4/kubepods/burstable/pod5b13a0a7236a53cd7dce16b0dc30c280/38e69990c558ea8ce07367b31f64be32b6c7c388a807eedba3541c7963adfb75/freezer.state
	I0317 13:27:06.603332  616625 api_server.go:204] freezer state: "THAWED"
	I0317 13:27:06.603371  616625 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0317 13:27:06.608303  616625 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0317 13:27:06.608388  616625 status.go:463] multinode-701285 apiserver status = Running (err=<nil>)
	I0317 13:27:06.608404  616625 status.go:176] multinode-701285 status: &{Name:multinode-701285 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:27:06.608451  616625 status.go:174] checking status of multinode-701285-m02 ...
	I0317 13:27:06.608745  616625 cli_runner.go:164] Run: docker container inspect multinode-701285-m02 --format={{.State.Status}}
	I0317 13:27:06.627878  616625 status.go:371] multinode-701285-m02 host status = "Running" (err=<nil>)
	I0317 13:27:06.627910  616625 host.go:66] Checking if "multinode-701285-m02" exists ...
	I0317 13:27:06.628221  616625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-701285-m02
	I0317 13:27:06.646777  616625 host.go:66] Checking if "multinode-701285-m02" exists ...
	I0317 13:27:06.647059  616625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0317 13:27:06.647122  616625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-701285-m02
	I0317 13:27:06.666695  616625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33290 SSHKeyPath:/home/jenkins/minikube-integration/20539-446828/.minikube/machines/multinode-701285-m02/id_rsa Username:docker}
	I0317 13:27:06.765909  616625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0317 13:27:06.778674  616625 status.go:176] multinode-701285-m02 status: &{Name:multinode-701285-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:27:06.778721  616625 status.go:174] checking status of multinode-701285-m03 ...
	I0317 13:27:06.779072  616625 cli_runner.go:164] Run: docker container inspect multinode-701285-m03 --format={{.State.Status}}
	I0317 13:27:06.798204  616625 status.go:371] multinode-701285-m03 host status = "Stopped" (err=<nil>)
	I0317 13:27:06.798234  616625 status.go:384] host is not running, skipping remaining checks
	I0317 13:27:06.798241  616625 status.go:176] multinode-701285-m03 status: &{Name:multinode-701285-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
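As the exit status 7 above shows, minikube status reports a degraded cluster through its exit code, which makes the check scriptable. A sketch, assuming the hypothetical demo profile:

    minikube -p demo node stop m03
    # status exits non-zero (7 here) while any node is stopped
    if ! minikube -p demo status; then
        echo "cluster degraded: at least one node is not running"
    fi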

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 node start m03 -v=7 --alsologtostderr
E0317 13:27:14.569328  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-701285 node start m03 -v=7 --alsologtostderr: (8.070308809s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.79s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (86.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-701285
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-701285
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-701285: (24.940669636s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701285 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-701285 --wait=true -v=8 --alsologtostderr: (1m1.430719991s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-701285
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.48s)
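The invariant tested here is that a full stop/start cycle preserves the node set. A sketch of the same check, assuming the hypothetical demo profile:

    minikube node list -p demo > /tmp/nodes-before.txt
    minikube stop -p demo
    minikube start -p demo --wait=true
    minikube node list -p demo > /tmp/nodes-after.txt
    # no output expected: the node list should survive the restart
    diff /tmp/nodes-before.txt /tmp/nodes-after.txt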

                                                
                                    
TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-701285 node delete m03: (4.524889112s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-701285 stop: (23.76247583s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-701285 status: exit status 7 (96.623231ms)

                                                
                                                
-- stdout --
	multinode-701285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-701285-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-701285 status --alsologtostderr: exit status 7 (95.822969ms)

                                                
                                                
-- stdout --
	multinode-701285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-701285-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0317 13:29:11.153297  626397 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:29:11.153598  626397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:29:11.153608  626397 out.go:358] Setting ErrFile to fd 2...
	I0317 13:29:11.153613  626397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:29:11.153881  626397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:29:11.154064  626397 out.go:352] Setting JSON to false
	I0317 13:29:11.154106  626397 mustload.go:65] Loading cluster: multinode-701285
	I0317 13:29:11.154277  626397 notify.go:220] Checking for updates...
	I0317 13:29:11.154517  626397 config.go:182] Loaded profile config "multinode-701285": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:29:11.154562  626397 status.go:174] checking status of multinode-701285 ...
	I0317 13:29:11.155145  626397 cli_runner.go:164] Run: docker container inspect multinode-701285 --format={{.State.Status}}
	I0317 13:29:11.176673  626397 status.go:371] multinode-701285 host status = "Stopped" (err=<nil>)
	I0317 13:29:11.176750  626397 status.go:384] host is not running, skipping remaining checks
	I0317 13:29:11.176760  626397 status.go:176] multinode-701285 status: &{Name:multinode-701285 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0317 13:29:11.176827  626397 status.go:174] checking status of multinode-701285-m02 ...
	I0317 13:29:11.177386  626397 cli_runner.go:164] Run: docker container inspect multinode-701285-m02 --format={{.State.Status}}
	I0317 13:29:11.197205  626397 status.go:371] multinode-701285-m02 host status = "Stopped" (err=<nil>)
	I0317 13:29:11.197231  626397 status.go:384] host is not running, skipping remaining checks
	I0317 13:29:11.197240  626397 status.go:176] multinode-701285-m02 status: &{Name:multinode-701285-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701285 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-701285 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.427521581s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-701285 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.11s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (25.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-701285
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701285-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-701285-m02 --driver=docker  --container-runtime=containerd: exit status 14 (87.432702ms)

                                                
                                                
-- stdout --
	* [multinode-701285-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-701285-m02' is duplicated with machine name 'multinode-701285-m02' in profile 'multinode-701285'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-701285-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-701285-m03 --driver=docker  --container-runtime=containerd: (23.373795938s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-701285
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-701285: exit status 80 (295.998526ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-701285 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-701285-m03 already exists in multinode-701285-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-701285-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-701285-m03: (1.910835509s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.73s)
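Worker machines are named <profile>-m02, <profile>-m03, and so on, so a new profile may not reuse an existing machine name (exit status 14, MK_USAGE), and a node add that would produce a name already claimed by another profile fails with exit status 80 (GUEST_NODE_ADD). A sketch of both failure modes, assuming the hypothetical demo profile whose m03 worker was deleted earlier:

    # rejected: demo-m02 is already a machine inside profile demo (exit status 14)
    minikube start -p demo-m02 --driver=docker --container-runtime=containerd
    # demo-m03 is currently free, so this creates a standalone profile...
    minikube start -p demo-m03 --driver=docker --container-runtime=containerd
    # ...which now blocks re-adding a node to demo (exit status 80)
    minikube node add -p demo
    minikube delete -p demo-m03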

                                                
                                    
TestPreload (125.61s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-461927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0317 13:31:11.429551  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-461927 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m11.365502335s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-461927 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-461927 image pull gcr.io/k8s-minikube/busybox: (2.493849999s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-461927
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-461927: (12.005002699s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-461927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0317 13:32:14.568817  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:32:34.502655  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-461927 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (37.153190739s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-461927 image list
helpers_test.go:175: Cleaning up "test-preload-461927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-461927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-461927: (2.358886508s)
--- PASS: TestPreload (125.61s)
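The scenario above verifies that an image pulled into a cluster created without preloaded tarballs survives a stop and a restart with the current binary. A sketch under the same flags, with the hypothetical profile name preload-demo:

    minikube start -p preload-demo --memory=2200 --preload=false \
        --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=2200 --wait=true \
        --driver=docker --container-runtime=containerd
    # the previously pulled image must still be present
    minikube -p preload-demo image list | grep busybox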

                                                
                                    
TestScheduledStopUnix (100.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-236874 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-236874 --memory=2048 --driver=docker  --container-runtime=containerd: (24.138040096s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-236874 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-236874 -n scheduled-stop-236874
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-236874 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0317 13:33:04.188488  453732 retry.go:31] will retry after 110.161µs: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.189766  453732 retry.go:31] will retry after 85.441µs: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.190984  453732 retry.go:31] will retry after 211.315µs: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.192217  453732 retry.go:31] will retry after 269.782µs: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.193457  453732 retry.go:31] will retry after 758.477µs: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.194668  453732 retry.go:31] will retry after 573.408µs: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.195861  453732 retry.go:31] will retry after 673.041µs: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.197083  453732 retry.go:31] will retry after 2.433984ms: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.200455  453732 retry.go:31] will retry after 2.507646ms: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.203842  453732 retry.go:31] will retry after 3.865798ms: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.208230  453732 retry.go:31] will retry after 4.886392ms: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.213640  453732 retry.go:31] will retry after 8.071241ms: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.222029  453732 retry.go:31] will retry after 19.079841ms: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.241240  453732 retry.go:31] will retry after 20.045567ms: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
I0317 13:33:04.261473  453732 retry.go:31] will retry after 40.168791ms: open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/scheduled-stop-236874/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-236874 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-236874 -n scheduled-stop-236874
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-236874
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-236874 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-236874
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-236874: exit status 7 (76.093192ms)

                                                
                                                
-- stdout --
	scheduled-stop-236874
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-236874 -n scheduled-stop-236874
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-236874 -n scheduled-stop-236874: exit status 7 (71.895282ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-236874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-236874
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-236874: (4.702031191s)
--- PASS: TestScheduledStopUnix (100.31s)
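The scheduled-stop flow above exercises scheduling, re-scheduling, and cancellation; the pending deadline is exposed through the TimeToStop status field. A sketch, assuming the hypothetical profile sched-demo:

    # returns immediately; the stop fires 5 minutes later
    minikube stop -p sched-demo --schedule 5m
    minikube status --format={{.TimeToStop}} -p sched-demo
    # a new schedule replaces the old one; --cancel-scheduled clears it
    minikube stop -p sched-demo --schedule 15s
    minikube stop -p sched-demo --cancel-scheduled
    minikube status --format={{.Host}} -p sched-demo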

                                                
                                    
TestInsufficientStorage (12.84s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-353015 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-353015 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.372953597s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6683e989-12d7-42be-aa37-0017877c0500","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-353015] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"25d2c51a-38ac-4e90-8643-c4b84312f557","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20539"}}
	{"specversion":"1.0","id":"6cf5e54f-3058-41f3-82ad-56dcb11f8193","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c73191f5-5fe4-423e-9533-b17e3a393ce9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig"}}
	{"specversion":"1.0","id":"79e063e0-995f-45fe-afed-37707381329f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube"}}
	{"specversion":"1.0","id":"07860d4f-9658-4509-94e0-c0f8ea640acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"20463734-85f7-4c16-8bb7-53b28dd46e65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f8d84655-ac28-4d40-ab4d-be3deb7ac270","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6499ee9f-6793-46b0-897b-208839a082bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d6af4280-65b8-46d2-9379-c4a2ceea7bab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c2d3992-c959-45c0-a25e-09def31907fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"51356c86-6a56-46fb-a23a-eed4e9a6e6fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-353015\" primary control-plane node in \"insufficient-storage-353015\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1332b3c-903a-44af-a78a-42afc1e5f168","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1741860993-20523 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe64b21d-3c64-4e38-a1ff-4a37d3968f0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c310bae-25bc-47c1-ac84-6d6a329bdecf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-353015 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-353015 --output=json --layout=cluster: exit status 7 (285.65928ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-353015","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-353015","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 13:34:30.562094  649264 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-353015" does not appear in /home/jenkins/minikube-integration/20539-446828/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-353015 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-353015 --output=json --layout=cluster: exit status 7 (289.509107ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-353015","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-353015","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0317 13:34:30.851788  649362 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-353015" does not appear in /home/jenkins/minikube-integration/20539-446828/kubeconfig
	E0317 13:34:30.863532  649362 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/insufficient-storage-353015/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-353015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-353015
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-353015: (1.890385036s)
--- PASS: TestInsufficientStorage (12.84s)
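The MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible in the CloudEvents output above are test hooks that make minikube believe /var is effectively full, producing exit code 26 (RSRC_DOCKER_STORAGE); cluster-level status then carries the HTTP-style code 507. A sketch (the jq dependency and the exact hook semantics are assumptions), with the hypothetical profile storage-demo:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
        minikube start -p storage-demo --memory=2048 --output=json --wait=true \
        --driver=docker --container-runtime=containerd
    echo "start exit code: $?"   # expect 26
    # expect 507 (InsufficientStorage)
    minikube status -p storage-demo --output=json --layout=cluster | jq .StatusCode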

                                                
                                    
TestRunningBinaryUpgrade (65.3s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2321109564 start -p running-upgrade-437633 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2321109564 start -p running-upgrade-437633 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (28.18069345s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-437633 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-437633 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.8635479s)
helpers_test.go:175: Cleaning up "running-upgrade-437633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-437633
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-437633: (2.68509252s)
--- PASS: TestRunningBinaryUpgrade (65.30s)

                                                
                                    
TestKubernetesUpgrade (323.7s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-455642 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-455642 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.752155534s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-455642
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-455642: (3.391149877s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-455642 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-455642 status --format={{.Host}}: exit status 7 (93.547826ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-455642 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-455642 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m27.682776654s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-455642 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-455642 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-455642 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (75.1592ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-455642] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-455642
	    minikube start -p kubernetes-upgrade-455642 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4556422 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-455642 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-455642 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-455642 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.397787079s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-455642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-455642
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-455642: (2.247574671s)
--- PASS: TestKubernetesUpgrade (323.70s)
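The upgrade test pins an old Kubernetes version, stops, restarts at a newer version, then confirms an in-place downgrade is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) while a restart at the new version still succeeds. A sketch, assuming the hypothetical profile upgrade-demo:

    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=containerd
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.32.2 \
        --driver=docker --container-runtime=containerd
    # refused: existing clusters cannot be downgraded in place (exit status 106)
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=containerd || true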

                                                
                                    
TestMissingContainerUpgrade (96.57s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3839510185 start -p missing-upgrade-176618 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3839510185 start -p missing-upgrade-176618 --memory=2200 --driver=docker  --container-runtime=containerd: (28.662725938s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-176618
I0317 13:37:25.920377  453732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 13:37:25.920558  453732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0317 13:37:25.963720  453732 install.go:137] /home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0317 13:37:25.963780  453732 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0317 13:37:25.963910  453732 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0317 13:37:25.963958  453732 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3483339833/002/docker-machine-driver-kvm2
I0317 13:37:26.022675  453732 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3483339833/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc0006a55a8 gz:0xc0006a5690 tar:0xc0006a5640 tar.bz2:0xc0006a5650 tar.gz:0xc0006a5660 tar.xz:0xc0006a5670 tar.zst:0xc0006a5680 tbz2:0xc0006a5650 tgz:0xc0006a5660 txz:0xc0006a5670 tzst:0xc0006a5680 xz:0xc0006a5698 zip:0xc0006a56a0 zst:0xc0006a56b0] Getters:map[file:0xc001c5ea80 http:0xc000721720 https:0xc000721770] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0317 13:37:26.022728  453732 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3483339833/002/docker-machine-driver-kvm2
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-176618: (10.336226852s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-176618
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-176618 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-176618 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.285928818s)
helpers_test.go:175: Cleaning up "missing-upgrade-176618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-176618
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-176618: (7.113608538s)
--- PASS: TestMissingContainerUpgrade (96.57s)
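Here the node container created by a legacy binary is destroyed out from under the profile, and a plain start with the current binary must recreate it. A sketch, assuming a hypothetical profile missing-demo whose node container carries the same name (as it does for the profile above):

    docker stop missing-demo && docker rm missing-demo
    # recovers the profile by recreating the node container
    minikube start -p missing-demo --memory=2200 --driver=docker --container-runtime=containerd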

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.63s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-233356 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-233356 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (89.788527ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-233356] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
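As the MK_USAGE error explains, --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config must be unset first. A sketch, assuming the hypothetical profile nok8s-demo:

    # rejected with exit status 14
    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 \
        --driver=docker --container-runtime=containerd
    # clear any globally pinned version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=containerd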

TestNoKubernetes/serial/StartWithK8s (34.75s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-233356 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-233356 --driver=docker  --container-runtime=containerd: (34.337688219s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-233356 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.75s)

TestStoppedBinaryUpgrade/Upgrade (146.05s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1648244593 start -p stopped-upgrade-274113 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1648244593 start -p stopped-upgrade-274113 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m25.608742927s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1648244593 -p stopped-upgrade-274113 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1648244593 -p stopped-upgrade-274113 stop: (20.036260894s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-274113 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-274113 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.39927259s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.05s)

TestNoKubernetes/serial/StartWithStopK8s (17.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-233356 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-233356 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.220337188s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-233356 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-233356 status -o json: exit status 2 (312.025582ms)

-- stdout --
	{"Name":"NoKubernetes-233356","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-233356
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-233356: (1.971348229s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.50s)

TestNoKubernetes/serial/Start (5.56s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-233356 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-233356 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.557693284s)
--- PASS: TestNoKubernetes/serial/Start (5.56s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-233356 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-233356 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.603385ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
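The `ssh: Process exited with status 3` in stderr is the interesting detail: `systemctl is-active --quiet <unit>` exits 0 when the unit is active and non-zero (3 here) when it is not, which is exactly what a --no-kubernetes node should report for kubelet. A minimal local sketch of the same probe (unit name from the log; this runs systemctl directly rather than over ssh):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit code alone carries the answer.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	if err == nil {
		fmt.Println("kubelet is active")
	} else if errors.As(err, &exitErr) {
		fmt.Println("kubelet is not active, exit status:", exitErr.ExitCode())
	} else {
		fmt.Println("could not run systemctl:", err)
	}
}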

TestNoKubernetes/serial/ProfileList (5.36s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.414569918s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (5.36s)

TestNoKubernetes/serial/Stop (1.86s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-233356
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-233356: (1.863837966s)
--- PASS: TestNoKubernetes/serial/Stop (1.86s)

TestNoKubernetes/serial/StartNoArgs (7.83s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-233356 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-233356 --driver=docker  --container-runtime=containerd: (7.828271165s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.83s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-233356 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-233356 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.042368ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestNetworkPlugins/group/false (3.66s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-270902 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-270902 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (156.039393ms)

-- stdout --
	* [false-270902] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20539
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0317 13:36:12.771263  679630 out.go:345] Setting OutFile to fd 1 ...
	I0317 13:36:12.771374  679630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:36:12.771380  679630 out.go:358] Setting ErrFile to fd 2...
	I0317 13:36:12.771384  679630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0317 13:36:12.771593  679630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-446828/.minikube/bin
	I0317 13:36:12.772203  679630 out.go:352] Setting JSON to false
	I0317 13:36:12.773659  679630 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11913,"bootTime":1742206660,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0317 13:36:12.773814  679630 start.go:139] virtualization: kvm guest
	I0317 13:36:12.775743  679630 out.go:177] * [false-270902] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0317 13:36:12.776963  679630 out.go:177]   - MINIKUBE_LOCATION=20539
	I0317 13:36:12.776997  679630 notify.go:220] Checking for updates...
	I0317 13:36:12.779073  679630 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0317 13:36:12.780521  679630 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20539-446828/kubeconfig
	I0317 13:36:12.781883  679630 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-446828/.minikube
	I0317 13:36:12.783277  679630 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0317 13:36:12.784674  679630 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0317 13:36:12.786385  679630 config.go:182] Loaded profile config "cert-expiration-193618": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
	I0317 13:36:12.786491  679630 config.go:182] Loaded profile config "running-upgrade-437633": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0317 13:36:12.786566  679630 config.go:182] Loaded profile config "stopped-upgrade-274113": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0317 13:36:12.786672  679630 driver.go:394] Setting default libvirt URI to qemu:///system
	I0317 13:36:12.810868  679630 docker.go:123] docker version: linux-28.0.1:Docker Engine - Community
	I0317 13:36:12.810966  679630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0317 13:36:12.865587  679630 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:76 SystemTime:2025-03-17 13:36:12.853680584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.0.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0317 13:36:12.865758  679630 docker.go:318] overlay module found
	I0317 13:36:12.867568  679630 out.go:177] * Using the docker driver based on user configuration
	I0317 13:36:12.869165  679630 start.go:297] selected driver: docker
	I0317 13:36:12.869189  679630 start.go:901] validating driver "docker" against <nil>
	I0317 13:36:12.869207  679630 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0317 13:36:12.871753  679630 out.go:201] 
	W0317 13:36:12.872961  679630 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0317 13:36:12.874261  679630 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-270902 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-270902

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-270902

>>> host: /etc/nsswitch.conf:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /etc/hosts:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /etc/resolv.conf:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-270902

>>> host: crictl pods:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: crictl containers:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> k8s: describe netcat deployment:
error: context "false-270902" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-270902" does not exist

>>> k8s: netcat logs:
error: context "false-270902" does not exist

>>> k8s: describe coredns deployment:
error: context "false-270902" does not exist

>>> k8s: describe coredns pods:
error: context "false-270902" does not exist

>>> k8s: coredns logs:
error: context "false-270902" does not exist

>>> k8s: describe api server pod(s):
error: context "false-270902" does not exist

>>> k8s: api server logs:
error: context "false-270902" does not exist

>>> host: /etc/cni:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: ip a s:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: ip r s:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: iptables-save:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: iptables table nat:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> k8s: describe kube-proxy daemon set:
error: context "false-270902" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-270902" does not exist

>>> k8s: kube-proxy logs:
error: context "false-270902" does not exist

>>> host: kubelet daemon status:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: kubelet daemon config:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> k8s: kubelet logs:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Mar 2025 13:35:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-193618
contexts:
- context:
    cluster: cert-expiration-193618
    extensions:
    - extension:
        last-update: Mon, 17 Mar 2025 13:35:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-193618
  name: cert-expiration-193618
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-193618
  user:
    client-certificate: /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/cert-expiration-193618/client.crt
    client-key: /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/cert-expiration-193618/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-270902

>>> host: docker daemon status:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: docker daemon config:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /etc/docker/daemon.json:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: docker system info:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: cri-docker daemon status:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: cri-docker daemon config:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: cri-dockerd version:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: containerd daemon status:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: containerd daemon config:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /etc/containerd/config.toml:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: containerd config dump:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: crio daemon status:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: crio daemon config:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: /etc/crio:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"

>>> host: crio config:
* Profile "false-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-270902"
----------------------- debugLogs end: false-270902 [took: 3.318333785s] --------------------------------
helpers_test.go:175: Cleaning up "false-270902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-270902
--- PASS: TestNetworkPlugins/group/false (3.66s)
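As with StartNoK8sWithVersion above, this PASS verifies a rejection: the containerd runtime needs a CNI, so --cni=false must exit with MK_USAGE before any cluster is created (the debugLogs confirm no profile or kubectl context was ever written). A rough re-implementation of that validation rule, for illustration only (validateCNI is a made-up name, not minikube's actual function):

package main

import (
	"errors"
	"fmt"
)

// validateCNI is a hypothetical stand-in for the usage check in the log:
// container runtimes other than Docker cannot run without a CNI plugin.
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime != "docker" {
		return errors.New(`The "` + containerRuntime + `" container runtime requires CNI`)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}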

TestPause/serial/Start (45.72s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-553776 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-553776 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (45.719798423s)
--- PASS: TestPause/serial/Start (45.72s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.36s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-274113
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-274113: (2.357534098s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.36s)

TestPause/serial/SecondStartNoReconfiguration (6.72s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-553776 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-553776 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.703218816s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.72s)

TestPause/serial/Pause (0.8s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-553776 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.80s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-553776 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-553776 --output=json --layout=cluster: exit status 2 (406.276633ms)

-- stdout --
	{"Name":"pause-553776","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-553776","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
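The --layout=cluster payload above is worth decoding: minikube reports per-component state with HTTP-style status codes (418 "Paused", 405 "Stopped", 200 "OK"), which is why the paused cluster yields exit status 2 rather than 0. A sketch that unmarshals an abridged copy of the exact JSON from the log (struct shapes are inferred from the output, not taken from minikube's source):

package main

import (
	"encoding/json"
	"fmt"
)

// Field names follow the JSON in the log; the types are inferred.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	raw := `{"Name":"pause-553776","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-553776","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("%s/%s: %d %s\n", n.Name, c.Name, c.StatusCode, c.StatusName)
		}
	}
}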

TestPause/serial/Unpause (0.75s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-553776 --alsologtostderr -v=5
E0317 13:37:14.569209  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestPause/serial/Unpause (0.75s)

TestPause/serial/PauseAgain (0.9s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-553776 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.90s)

TestPause/serial/DeletePaused (5.99s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-553776 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-553776 --alsologtostderr -v=5: (5.991214736s)
--- PASS: TestPause/serial/DeletePaused (5.99s)

TestPause/serial/VerifyDeletedResources (0.84s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-553776
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-553776: exit status 1 (31.621533ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-553776: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.84s)
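The non-zero exit from `docker volume inspect` is the assertion here: an empty [] on stdout plus "no such volume" on stderr proves the delete removed the profile's volume. The same check in a few lines of Go (volume name taken from the log; illustrative, not the test's actual helper):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `docker volume inspect` exits non-zero when the volume is absent.
	out, err := exec.Command("docker", "volume", "inspect", "pause-553776").CombinedOutput()
	if err != nil {
		fmt.Printf("volume gone, as expected (%v):\n%s", err, out)
		return
	}
	fmt.Printf("volume still present:\n%s", out)
}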

TestStartStop/group/old-k8s-version/serial/FirstStart (109.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-104810 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-104810 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (1m49.627407612s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (109.63s)

TestStartStop/group/no-preload/serial/FirstStart (61.25s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-470335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-470335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (1m1.25095868s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.25s)

TestStartStop/group/embed-certs/serial/FirstStart (52.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-783950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-783950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (52.323915967s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.32s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-104810 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9cf48ca0-886d-4ee1-8bd5-5bc487a645d4] Pending
helpers_test.go:344: "busybox" [9cf48ca0-886d-4ee1-8bd5-5bc487a645d4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9cf48ca0-886d-4ee1-8bd5-5bc487a645d4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003335745s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-104810 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-104810 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-104810 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-104810 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-104810 --alsologtostderr -v=3: (12.084337076s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

TestStartStop/group/no-preload/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-470335 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e9380c2f-3b87-470f-8003-fe77768d4928] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e9380c2f-3b87-470f-8003-fe77768d4928] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004123615s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-470335 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104810 -n old-k8s-version-104810
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104810 -n old-k8s-version-104810: exit status 7 (80.030571ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-104810 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (29.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-104810 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-104810 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (28.832051608s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-104810 -n old-k8s-version-104810
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (29.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-470335 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-470335 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/no-preload/serial/Stop (13.23s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-470335 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-470335 --alsologtostderr -v=3: (13.231236462s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.23s)

TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-783950 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f88f2550-b521-4e49-a432-45421ba6eccc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f88f2550-b521-4e49-a432-45421ba6eccc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004493828s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-783950 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-470335 -n no-preload-470335
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-470335 -n no-preload-470335: exit status 7 (81.088145ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-470335 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (264.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-470335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-470335 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m23.866030017s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-470335 -n no-preload-470335
E0317 13:44:20.203622  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (264.25s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-783950 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-783950 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105933071s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-783950 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)
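
The image and registry overrides shown above are general addon flags. A minimal sketch of the same call, assuming the embed-certs profile is running:

    # Point the metrics-server addon at an alternate image and a fake
    # registry (the test only inspects the rendered deployment, not pulls).
    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-783950 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # Inspect what the addon manager rendered.
    kubectl --context embed-certs-783950 describe deploy/metrics-server -n kube-system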

TestStartStop/group/embed-certs/serial/Stop (13.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-783950 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-783950 --alsologtostderr -v=3: (13.132721677s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (26.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-94m79" [266aa26b-8d6f-429a-acbb-b115bba6a32a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-94m79" [266aa26b-8d6f-429a-acbb-b115bba6a32a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 26.003174007s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (26.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783950 -n embed-certs-783950
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783950 -n embed-certs-783950: exit status 7 (93.177052ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-783950 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (265.17s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-783950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-783950 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m24.827277704s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-783950 -n embed-certs-783950
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (265.17s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-94m79" [266aa26b-8d6f-429a-acbb-b115bba6a32a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004930553s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-104810 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-104810 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)
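
VerifyKubernetesImages lists every image present in the profile and reports any that fall outside minikube's expected set; the "non-minikube" busybox and kindnet images above were pulled by earlier subtests. To inspect by hand:

    # JSON list of images in the profile's container runtime.
    out/minikube-linux-amd64 -p old-k8s-version-104810 image list --format=json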

TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-104810 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104810 -n old-k8s-version-104810
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104810 -n old-k8s-version-104810: exit status 2 (325.781439ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-104810 -n old-k8s-version-104810
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-104810 -n old-k8s-version-104810: exit status 2 (322.312995ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-104810 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-104810 -n old-k8s-version-104810
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-104810 -n old-k8s-version-104810
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)
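
The Pause subtest is a fixed sequence, and the two "Non-zero exit ... (may be ok)" entries above are expected: while paused, minikube reports the apiserver as Paused and the kubelet as Stopped, each via exit status 2. A hand-run sketch:

    out/minikube-linux-amd64 pause -p old-k8s-version-104810 --alsologtostderr -v=1
    # Both status queries exit 2 while the cluster is paused.
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-104810 -n old-k8s-version-104810
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-104810 -n old-k8s-version-104810
    out/minikube-linux-amd64 unpause -p old-k8s-version-104810 --alsologtostderr -v=1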

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-333449 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0317 13:41:11.430137  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-333449 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (45.864186277s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (45.86s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-333449 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4f71411d-872a-454e-8da7-b8f9114ce96f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4f71411d-872a-454e-8da7-b8f9114ce96f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003843708s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-333449 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)
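
DeployApp creates the busybox pod from testdata and then checks the container's open-file limit. A rough hand-run equivalent; note the kubectl wait call here is a stand-in for the test's label-based poll, not what the harness literally runs:

    kubectl --context default-k8s-diff-port-333449 create -f testdata/busybox.yaml
    # The harness polls pods labeled integration-test=busybox for up to 8m.
    kubectl --context default-k8s-diff-port-333449 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context default-k8s-diff-port-333449 exec busybox -- /bin/sh -c "ulimit -n"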

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-333449 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-333449 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-333449 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-333449 --alsologtostderr -v=3: (11.979486667s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.98s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449: exit status 7 (78.208665ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-333449 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-333449 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
E0317 13:41:57.644924  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:42:14.568752  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-333449 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (4m24.71659984s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.14s)

TestStartStop/group/newest-cni/serial/FirstStart (30.99s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-402500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-402500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (30.987880431s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.99s)
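
The newest-cni profile starts with a bare CNI and a custom pod CIDR pushed through kubeadm, and only waits for the apiserver, system pods, and default service account (pods cannot schedule until a CNI is installed, hence the later warnings). Same flags, by hand:

    out/minikube-linux-amd64 start -p newest-cni-402500 --memory=2200 \
      --alsologtostderr --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.2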

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-402500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/Stop (1.84s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-402500 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-402500 --alsologtostderr -v=3: (1.838531333s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.84s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-402500 -n newest-cni-402500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-402500 -n newest-cni-402500: exit status 7 (75.439525ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-402500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (13.52s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-402500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-402500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.2: (13.156324411s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-402500 -n newest-cni-402500
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.52s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-402500 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (3.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-402500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-402500 -n newest-cni-402500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-402500 -n newest-cni-402500: exit status 2 (329.211754ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-402500 -n newest-cni-402500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-402500 -n newest-cni-402500: exit status 2 (331.700783ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-402500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-402500 -n newest-cni-402500
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-402500 -n newest-cni-402500
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)

TestNetworkPlugins/group/auto/Start (55.01s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0317 13:44:17.632972  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:44:17.639476  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:44:17.651016  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:44:17.672487  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:44:17.714658  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:44:17.796171  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
E0317 13:44:17.958014  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (55.007613327s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-270902 "pgrep -a kubelet"
E0317 13:44:18.279863  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
I0317 13:44:18.447514  453732 config.go:182] Loaded profile config "auto-270902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
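
KubeletFlags just inspects the kubelet command line inside the node over minikube's SSH tunnel:

    # pgrep -a prints the full argv, so the configured kubelet flags are visible.
    out/minikube-linux-amd64 ssh -p auto-270902 "pgrep -a kubelet"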

TestNetworkPlugins/group/auto/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-270902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-qvfjd" [744b8ca0-1e38-42aa-8bda-03613ada42e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0317 13:44:18.922111  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-qvfjd" [744b8ca0-1e38-42aa-8bda-03613ada42e5] Running
E0317 13:44:22.765506  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00360938s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tt6fp" [51982933-d67e-4ac1-b517-c0d77ed7202a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003223022s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tt6fp" [51982933-d67e-4ac1-b517-c0d77ed7202a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003863621s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-470335 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-270902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0317 13:44:27.886876  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
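
The DNS, Localhost, and HairPin probes above all run against the netcat deployment created by NetCatPod; HairPin checks that a pod can reach itself through its own service name. The three probes by hand:

    kubectl --context auto-270902 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin case: connect back to the pod via the netcat service.
    kubectl --context auto-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"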

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-470335 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.06s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-470335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-470335 -n no-preload-470335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-470335 -n no-preload-470335: exit status 2 (346.658319ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-470335 -n no-preload-470335
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-470335 -n no-preload-470335: exit status 2 (335.641844ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-470335 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-470335 -n no-preload-470335
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-470335 -n no-preload-470335
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.06s)

TestNetworkPlugins/group/kindnet/Start (60.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0317 13:44:38.128789  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m0.129895074s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.13s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wpn5c" [47598fa7-5ddf-47c1-9912-44fc22e2b92f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003921797s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/calico/Start (56.36s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (56.35781134s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.36s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-wpn5c" [47598fa7-5ddf-47c1-9912-44fc22e2b92f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003950444s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-783950 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-783950 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-783950 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-783950 -n embed-certs-783950
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-783950 -n embed-certs-783950: exit status 2 (361.361372ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-783950 -n embed-certs-783950
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-783950 -n embed-certs-783950: exit status 2 (376.020104ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-783950 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-783950 -n embed-certs-783950
E0317 13:44:58.611086  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-783950 -n embed-certs-783950
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.25s)

                                                
TestNetworkPlugins/group/custom-flannel/Start (43.98s)

net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (43.978327424s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (43.98s)
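
Note that --cni accepts a path to a CNI manifest as well as a named plugin; here the test feeds it a flannel YAML from testdata. Equivalent invocation, with the flags taken from the log above:

    out/minikube-linux-amd64 start -p custom-flannel-270902 --memory=3072 \
      --alsologtostderr --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd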

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-l5ghs" [5f1ec357-3cdd-47d3-8fe0-7bad82a71d83] Running
E0317 13:45:39.573659  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004157346s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-270902 "pgrep -a kubelet"
I0317 13:45:44.267817  453732 config.go:182] Loaded profile config "kindnet-270902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-270902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7zlsw" [f4ac4ac3-5fd8-4dda-8411-4fd287bddfe0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7zlsw" [f4ac4ac3-5fd8-4dda-8411-4fd287bddfe0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004701952s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.20s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v9pzk" [de57ba93-3c7b-44b9-a4ea-6123f000d5b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004372244s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-270902 "pgrep -a kubelet"
I0317 13:45:46.756479  453732 config.go:182] Loaded profile config "custom-flannel-270902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.2s)

net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-270902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-lm59p" [761edf8b-98c6-4ac7-aef9-ad4285e3d4a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-lm59p" [761edf8b-98c6-4ac7-aef9-ad4285e3d4a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004055216s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.20s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-270902 "pgrep -a kubelet"
I0317 13:45:51.288063  453732 config.go:182] Loaded profile config "calico-270902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-270902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-slvtl" [459768fe-d52d-4811-85b3-92fe95e859a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-slvtl" [459768fe-d52d-4811-85b3-92fe95e859a6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004481621s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.20s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-270902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-270902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-270902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (70.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m10.461365977s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.46s)

TestNetworkPlugins/group/flannel/Start (45.99s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (45.9944847s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.99s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jmjsr" [0fbb6ae1-fd2a-4465-a156-0709d87c0b2b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004295346s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/bridge/Start (69.21s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-270902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m9.214797523s)
--- PASS: TestNetworkPlugins/group/bridge/Start (69.21s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-jmjsr" [0fbb6ae1-fd2a-4465-a156-0709d87c0b2b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004091105s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-333449 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-333449 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250214-acbabc1a
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
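VerifyKubernetesImages lists the images loaded in the profile and flags anything outside the expected stock set, producing the "Found non-minikube image" lines above. A rough sketch of that shape of check; the "repoTags" JSON key and the one-registry allowlist are assumptions for illustration, and the real test's allowlist is broader than this:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image holds just the field this sketch reads; the "repoTags" key is an
// assumption about the --format=json output, not something this log shows.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "default-k8s-diff-port-333449",
		"image", "list", "--format=json").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Simplified allowlist: the real test knows the full stock
			// image set; here anything outside registry.k8s.io is flagged.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}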

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-333449 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449: exit status 2 (395.68824ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449: exit status 2 (396.002613ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-333449 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-333449 -n default-k8s-diff-port-333449
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.42s)
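Pause is the one step above where a non-zero exit is expected: while components are paused or stopped, minikube status exits with code 2, which the test records as "may be ok". A sketch of tolerating exactly that exit code, assuming the same binary and profile; the helper is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// status runs `minikube status` for one component and returns its text.
// Exit status 2 is expected while the cluster is paused or stopped, so
// it is deliberately not treated as a failure here.
func status(format, profile string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
		return string(out), nil // paused/stopped, not an error
	}
	return string(out), err
}

func main() {
	for _, f := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		text, err := status(f, "default-k8s-diff-port-333449")
		fmt.Printf("%s -> %q err=%v\n", f, text, err)
	}
}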
E0317 13:47:01.495353  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/old-k8s-version-104810/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gptb8" [6ec3e7a5-81cd-4958-b190-18910a944a9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003499129s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-270902 "pgrep -a kubelet"
I0317 13:47:11.645910  453732 config.go:182] Loaded profile config "flannel-270902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)
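The KubeletFlags steps fetch the live kubelet command line from inside the node with `minikube ssh "pgrep -a kubelet"`. A sketch of the same fetch plus one plausible assertion; the containerd socket path checked at the end is the conventional default, an assumption rather than something this log shows:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Grab the running kubelet's full command line from the node, as
	// the KubeletFlags step does over `minikube ssh`.
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "flannel-270902", "pgrep -a kubelet").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	line := strings.TrimSpace(string(out))
	fmt.Println("kubelet cmdline:", line)
	// Conventional containerd socket path; an assumption, not from the log.
	if strings.Contains(line, "unix:///run/containerd/containerd.sock") {
		fmt.Println("kubelet is wired to containerd")
	}
}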

TestNetworkPlugins/group/flannel/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-270902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-k6k6v" [e2e21f9d-89e7-4daf-ba4f-2fe74025de72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-k6k6v" [e2e21f9d-89e7-4daf-ba4f-2fe74025de72] Running
E0317 13:47:14.568489  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/addons-012219/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003873331s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)
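Each NetCatPod step replaces testdata/netcat-deployment.yaml and then waits for app=netcat pods to go from Pending to Running, as the two helpers_test.go lines above show. A minimal sketch of such a wait loop, assuming a single matching pod and kubectl on PATH; the suite's real helper tracks richer pod state than this:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPod polls pod phases for a label until all report Running.
// Assumes a single matching pod, so the jsonpath output is one word.
func waitForPod(ctx, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && string(out) == "Running" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q not ready within %s", label, timeout)
}

func main() {
	if err := waitForPod("flannel-270902", "app=netcat", 15*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("app=netcat healthy")
}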

TestNetworkPlugins/group/flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-270902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-270902 "pgrep -a kubelet"
I0317 13:47:26.282698  453732 config.go:182] Loaded profile config "enable-default-cni-270902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-270902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-f5knq" [0b26ca22-4c72-44e9-befc-8599b6182ae5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-f5knq" [0b26ca22-4c72-44e9-befc-8599b6182ae5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00414096s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-270902 "pgrep -a kubelet"
I0317 13:47:33.273219  453732 config.go:182] Loaded profile config "bridge-270902": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-270902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7pfmr" [2c95b160-3e92-4399-9798-de8d48ad4b7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-7pfmr" [2c95b160-3e92-4399-9798-de8d48ad4b7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004313775s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-270902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-270902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-270902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

Test skip (25/330)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-701674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-701674
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.66s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
E0317 13:36:11.429172  453732 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/functional-207072/client.crt: no such file or directory" logger="UnhandledError"
panic.go:631: 
----------------------- debugLogs start: kubenet-270902 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-270902

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-270902

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /etc/hosts:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /etc/resolv.conf:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-270902

>>> host: crictl pods:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: crictl containers:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> k8s: describe netcat deployment:
error: context "kubenet-270902" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-270902" does not exist

>>> k8s: netcat logs:
error: context "kubenet-270902" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-270902" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-270902" does not exist

>>> k8s: coredns logs:
error: context "kubenet-270902" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-270902" does not exist

>>> k8s: api server logs:
error: context "kubenet-270902" does not exist

>>> host: /etc/cni:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: ip a s:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: ip r s:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: iptables-save:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: iptables table nat:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-270902" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-270902" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-270902" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: kubelet daemon config:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> k8s: kubelet logs:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Mar 2025 13:35:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-193618
contexts:
- context:
    cluster: cert-expiration-193618
    extensions:
    - extension:
        last-update: Mon, 17 Mar 2025 13:35:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-193618
  name: cert-expiration-193618
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-193618
  user:
    client-certificate: /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/cert-expiration-193618/client.crt
    client-key: /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/cert-expiration-193618/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-270902

>>> host: docker daemon status:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: docker daemon config:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: docker system info:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: cri-docker daemon status:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: cri-docker daemon config:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: cri-dockerd version:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: containerd daemon status:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: containerd daemon config:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: containerd config dump:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: crio daemon status:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: crio daemon config:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: /etc/crio:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"

>>> host: crio config:
* Profile "kubenet-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-270902"
----------------------- debugLogs end: kubenet-270902 [took: 3.483281304s] --------------------------------
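Every probe in the dump above failed identically because the kubenet profile was skipped before a cluster, and hence a kubeconfig context, ever existed; the only context on disk is the leftover cert-expiration-193618 entry shown under "kubectl config". A sketch of the probe pattern the collector appears to use, assuming kubectl on PATH; probe and its arguments are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// probe mimics one debugLogs query: run kubectl against the named context
// and print whatever comes back, failure output included.
func probe(context string, args ...string) {
	full := append([]string{"--context", context}, args...)
	out, _ := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf(">>> kubectl %v\n%s\n", args, out)
}

func main() {
	// With no kubenet-270902 context, both print the
	// "context was not found" error seen throughout the dump.
	probe("kubenet-270902", "get", "nodes")
	probe("kubenet-270902", "exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default")
}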
helpers_test.go:175: Cleaning up "kubenet-270902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-270902
--- SKIP: TestNetworkPlugins/group/kubenet (3.66s)

TestNetworkPlugins/group/cilium (4.01s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-270902 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-270902

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-270902" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-270902" does not exist

>>> k8s: netcat logs:
error: context "cilium-270902" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-270902" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-270902" does not exist

>>> k8s: coredns logs:
error: context "cilium-270902" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-270902" does not exist

>>> k8s: api server logs:
error: context "cilium-270902" does not exist

>>> host: /etc/cni:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: ip a s:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: ip r s:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: iptables-save:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: iptables table nat:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-270902

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-270902

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-270902" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-270902" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-270902

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-270902

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-270902" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-270902" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-270902" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-270902" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-270902" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: kubelet daemon config:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> k8s: kubelet logs:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20539-446828/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 17 Mar 2025 13:35:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-193618
contexts:
- context:
    cluster: cert-expiration-193618
    extensions:
    - extension:
        last-update: Mon, 17 Mar 2025 13:35:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-193618
  name: cert-expiration-193618
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-193618
  user:
    client-certificate: /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/cert-expiration-193618/client.crt
    client-key: /home/jenkins/minikube-integration/20539-446828/.minikube/profiles/cert-expiration-193618/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-270902

>>> host: docker daemon status:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: docker daemon config:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: docker system info:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: cri-docker daemon status:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: cri-docker daemon config:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: cri-dockerd version:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: containerd daemon status:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: containerd daemon config:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: containerd config dump:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: crio daemon status:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: crio daemon config:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: /etc/crio:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

>>> host: crio config:
* Profile "cilium-270902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-270902"

----------------------- debugLogs end: cilium-270902 [took: 3.83750257s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-270902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-270902
--- SKIP: TestNetworkPlugins/group/cilium (4.01s)