Test Report: Docker_Linux 20598

63c1754226199ce281e4ac8e931674d5ef457043:2025-04-07:39038

Failed tests (12/345)

TestAddons/parallel/LocalPath (229.37s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-662808 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-662808 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d80409e4-1900-4a8f-9c48-4e8e81479f9a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
addons_test.go:901: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:901: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-662808 -n addons-662808
addons_test.go:901: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-04-07 12:55:37.189600628 +0000 UTC m=+486.935123929
addons_test.go:901: (dbg) Run:  kubectl --context addons-662808 describe po test-local-path -n default
addons_test.go:901: (dbg) kubectl --context addons-662808 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-662808/192.168.49.2
Start Time:       Mon, 07 Apr 2025 12:52:36 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.36
IPs:
  IP:  10.244.0.36
Containers:
  busybox:
    Container ID:  
    Image:         busybox:stable
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo 'local-path-provisioner' > /test/file1
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /test from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5ffsn (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-pvc
    ReadOnly:   false
  kube-api-access-5ffsn:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m1s                  default-scheduler  Successfully assigned default/test-local-path to addons-662808
  Warning  Failed     98s (x4 over 3m)      kubelet            Failed to pull image "busybox:stable": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    24s (x11 over 2m59s)  kubelet            Back-off pulling image "busybox:stable"
  Warning  Failed     24s (x11 over 2m59s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    9s (x5 over 3m)       kubelet            Pulling image "busybox:stable"
  Warning  Failed     9s (x5 over 3m)       kubelet            Error: ErrImagePull
  Warning  Failed     9s                    kubelet            Failed to pull image "busybox:stable": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
addons_test.go:901: (dbg) Run:  kubectl --context addons-662808 logs test-local-path -n default
addons_test.go:901: (dbg) Non-zero exit: kubectl --context addons-662808 logs test-local-path -n default: exit status 1 (69.649998ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:901: kubectl --context addons-662808 logs test-local-path -n default: exit status 1
addons_test.go:902: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
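Root cause: the kubelet's pulls of busybox:stable were rejected by Docker Hub with toomanyrequests (the unauthenticated pull rate limit), so the failure is a registry quota problem on the CI host rather than a local-path-provisioner regression. A minimal sketch of how one might confirm the quota and side-step the pull on a similar run (the profile name addons-662808 is taken from this report; the quota check uses Docker's documented ratelimitpreview/test endpoint):

    # Inspect the anonymous pull quota Docker Hub currently grants this host
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

    # Load the image into the minikube node so the kubelet never pulls from Docker Hub
    docker pull busybox:stable
    minikube -p addons-662808 image load busybox:stable

Authenticating the node's Docker daemon (docker login, or minikube's registry-creds addon) would raise the limit instead of avoiding it.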
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-662808
helpers_test.go:235: (dbg) docker inspect addons-662808:

-- stdout --
	[
	    {
	        "Id": "99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed",
	        "Created": "2025-04-07T12:48:05.508902626Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 775267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-07T12:48:05.541753081Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:037bd1b5a0f63899880a74b20d0e40b693fd199ade4ed9b883be5ed5726d15a6",
	        "ResolvConfPath": "/var/lib/docker/containers/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed/hosts",
	        "LogPath": "/var/lib/docker/containers/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed/99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed-json.log",
	        "Name": "/addons-662808",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-662808:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-662808",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "99376af8541bfcd1d208b4d57cccef4b5cb47011d904401f29109a155929b2ed",
	                "LowerDir": "/var/lib/docker/overlay2/a846772ad06386bb75dad5378a7df5577e11414d9a23e93d517b8eeb5bdf1dae-init/diff:/var/lib/docker/overlay2/4ad95e7f4a49b487176ca9dc3e3437ef3df8ea71a4a72c4a666a7db5084d5e6d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a846772ad06386bb75dad5378a7df5577e11414d9a23e93d517b8eeb5bdf1dae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a846772ad06386bb75dad5378a7df5577e11414d9a23e93d517b8eeb5bdf1dae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a846772ad06386bb75dad5378a7df5577e11414d9a23e93d517b8eeb5bdf1dae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-662808",
	                "Source": "/var/lib/docker/volumes/addons-662808/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-662808",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-662808",
	                "name.minikube.sigs.k8s.io": "addons-662808",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d1065e047604bb500bed362c0162463a013e167276aef9048480f5b852e254f",
	            "SandboxKey": "/var/run/docker/netns/7d1065e04760",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-662808": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:1f:37:9a:f9:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "87157fb96bf148188bf8cec10e52372da9869a32414022d777f4b879d54fa585",
	                    "EndpointID": "0bf267d4e286ac9e4068c794eca5091e45e3f276a38dc22b011ce4e630715f66",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-662808",
	                        "99376af8541b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-662808 -n addons-662808
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 logs -n 25
helpers_test.go:252: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                   | download-docker-089924 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC |                     |
	|         | download-docker-089924               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-089924            | download-docker-089924 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | 07 Apr 25 12:47 UTC |
	| start   | --download-only -p                   | binary-mirror-222869   | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC |                     |
	|         | binary-mirror-222869                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43139               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-222869              | binary-mirror-222869   | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | 07 Apr 25 12:47 UTC |
	| addons  | disable dashboard -p                 | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC |                     |
	|         | addons-662808                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC |                     |
	|         | addons-662808                        |                        |         |         |                     |                     |
	| start   | -p addons-662808 --wait=true         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | 07 Apr 25 12:51 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-662808 addons disable         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:51 UTC | 07 Apr 25 12:51 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-662808 addons disable         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | -p addons-662808                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-662808 addons                 | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-662808 addons                 | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-662808 addons disable         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-662808 ip                     | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	| addons  | addons-662808 addons disable         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-662808 addons                 | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-662808 addons                 | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | disable cloud-spanner                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ssh     | addons-662808 ssh curl -s            | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-662808 ip                     | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	| addons  | addons-662808 addons disable         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-662808 addons disable         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-662808 addons disable         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-662808 addons                 | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-662808 addons disable         | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-662808 addons                 | addons-662808          | jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:47:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:47:42.068395  774657 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:47:42.068917  774657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:42.068935  774657 out.go:358] Setting ErrFile to fd 2...
	I0407 12:47:42.068942  774657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:42.069206  774657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 12:47:42.069837  774657 out.go:352] Setting JSON to false
	I0407 12:47:42.070697  774657 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":73811,"bootTime":1743956251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:47:42.070800  774657 start.go:139] virtualization: kvm guest
	I0407 12:47:42.072634  774657 out.go:177] * [addons-662808] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:47:42.073939  774657 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:47:42.073941  774657 notify.go:220] Checking for updates...
	I0407 12:47:42.075298  774657 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:47:42.076576  774657 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	I0407 12:47:42.077656  774657 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	I0407 12:47:42.078795  774657 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:47:42.079934  774657 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:47:42.081457  774657 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:47:42.103349  774657 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:47:42.103496  774657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:47:42.151222  774657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-04-07 12:47:42.142912406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:47:42.151320  774657 docker.go:318] overlay module found
	I0407 12:47:42.153113  774657 out.go:177] * Using the docker driver based on user configuration
	I0407 12:47:42.154438  774657 start.go:297] selected driver: docker
	I0407 12:47:42.154454  774657 start.go:901] validating driver "docker" against <nil>
	I0407 12:47:42.154466  774657 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:47:42.155175  774657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:47:42.203396  774657 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-04-07 12:47:42.195402372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:47:42.203632  774657 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:47:42.203839  774657 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:47:42.205386  774657 out.go:177] * Using Docker driver with root privileges
	I0407 12:47:42.206475  774657 cni.go:84] Creating CNI manager for ""
	I0407 12:47:42.206538  774657 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:47:42.206548  774657 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:47:42.206609  774657 start.go:340] cluster config:
	{Name:addons-662808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:47:42.207789  774657 out.go:177] * Starting "addons-662808" primary control-plane node in "addons-662808" cluster
	I0407 12:47:42.208797  774657 cache.go:121] Beginning downloading kic base image for docker with docker
	I0407 12:47:42.209864  774657 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
	I0407 12:47:42.210872  774657 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:47:42.210905  774657 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
	I0407 12:47:42.210918  774657 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 12:47:42.210927  774657 cache.go:56] Caching tarball of preloaded images
	I0407 12:47:42.211004  774657 preload.go:172] Found /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 12:47:42.211017  774657 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 12:47:42.211350  774657 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/config.json ...
	I0407 12:47:42.211384  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/config.json: {Name:mk68064d92eeeab5e23dc5c9eec6bb53756c9e10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:47:42.226207  774657 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 to local cache
	I0407 12:47:42.226309  774657 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory
	I0407 12:47:42.226330  774657 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local cache directory, skipping pull
	I0407 12:47:42.226336  774657 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in cache, skipping pull
	I0407 12:47:42.226348  774657 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 as a tarball
	I0407 12:47:42.226359  774657 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 from local cache
	I0407 12:47:54.176527  774657 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 from cached tarball
	I0407 12:47:54.176572  774657 cache.go:230] Successfully downloaded all kic artifacts
	I0407 12:47:54.176619  774657 start.go:360] acquireMachinesLock for addons-662808: {Name:mkbe122773630acbb9c50768cde9ae1b5a1617df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:47:54.176730  774657 start.go:364] duration metric: took 89.433µs to acquireMachinesLock for "addons-662808"
	I0407 12:47:54.176760  774657 start.go:93] Provisioning new machine with config: &{Name:addons-662808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:47:54.176918  774657 start.go:125] createHost starting for "" (driver="docker")
	I0407 12:47:54.178582  774657 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0407 12:47:54.178883  774657 start.go:159] libmachine.API.Create for "addons-662808" (driver="docker")
	I0407 12:47:54.178921  774657 client.go:168] LocalClient.Create starting
	I0407 12:47:54.179033  774657 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem
	I0407 12:47:54.442327  774657 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/cert.pem
	I0407 12:47:54.550841  774657 cli_runner.go:164] Run: docker network inspect addons-662808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0407 12:47:54.566844  774657 cli_runner.go:211] docker network inspect addons-662808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0407 12:47:54.566917  774657 network_create.go:284] running [docker network inspect addons-662808] to gather additional debugging logs...
	I0407 12:47:54.566935  774657 cli_runner.go:164] Run: docker network inspect addons-662808
	W0407 12:47:54.582079  774657 cli_runner.go:211] docker network inspect addons-662808 returned with exit code 1
	I0407 12:47:54.582123  774657 network_create.go:287] error running [docker network inspect addons-662808]: docker network inspect addons-662808: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-662808 not found
	I0407 12:47:54.582146  774657 network_create.go:289] output of [docker network inspect addons-662808]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-662808 not found
	
	** /stderr **
	I0407 12:47:54.582219  774657 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 12:47:54.598125  774657 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00167d130}
	I0407 12:47:54.598182  774657 network_create.go:124] attempt to create docker network addons-662808 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0407 12:47:54.598253  774657 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-662808 addons-662808
	I0407 12:47:54.646285  774657 network_create.go:108] docker network addons-662808 192.168.49.0/24 created
	I0407 12:47:54.646329  774657 kic.go:121] calculated static IP "192.168.49.2" for the "addons-662808" container
	I0407 12:47:54.646406  774657 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0407 12:47:54.661512  774657 cli_runner.go:164] Run: docker volume create addons-662808 --label name.minikube.sigs.k8s.io=addons-662808 --label created_by.minikube.sigs.k8s.io=true
	I0407 12:47:54.677853  774657 oci.go:103] Successfully created a docker volume addons-662808
	I0407 12:47:54.677933  774657 cli_runner.go:164] Run: docker run --rm --name addons-662808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-662808 --entrypoint /usr/bin/test -v addons-662808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib
	I0407 12:48:01.485343  774657 cli_runner.go:217] Completed: docker run --rm --name addons-662808-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-662808 --entrypoint /usr/bin/test -v addons-662808:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib: (6.807348151s)
	I0407 12:48:01.485394  774657 oci.go:107] Successfully prepared a docker volume addons-662808
	I0407 12:48:01.485432  774657 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:48:01.485466  774657 kic.go:194] Starting extracting preloaded images to volume ...
	I0407 12:48:01.485545  774657 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-662808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir
	I0407 12:48:05.446250  774657 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-662808:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir: (3.960640639s)
	I0407 12:48:05.446283  774657 kic.go:203] duration metric: took 3.960814298s to extract preloaded images to volume ...
	W0407 12:48:05.446433  774657 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0407 12:48:05.446552  774657 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0407 12:48:05.493309  774657 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-662808 --name addons-662808 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-662808 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-662808 --network addons-662808 --ip 192.168.49.2 --volume addons-662808:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727
	I0407 12:48:05.774095  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Running}}
	I0407 12:48:05.792278  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:05.810193  774657 cli_runner.go:164] Run: docker exec addons-662808 stat /var/lib/dpkg/alternatives/iptables
	I0407 12:48:05.852163  774657 oci.go:144] the created container "addons-662808" has a running status.
	I0407 12:48:05.852195  774657 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa...
	I0407 12:48:05.978862  774657 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0407 12:48:06.000001  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:06.020298  774657 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0407 12:48:06.020323  774657 kic_runner.go:114] Args: [docker exec --privileged addons-662808 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0407 12:48:06.060596  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:06.079717  774657 machine.go:93] provisionDockerMachine start ...
	I0407 12:48:06.079853  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:06.099291  774657 main.go:141] libmachine: Using SSH client type: native
	I0407 12:48:06.099639  774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0407 12:48:06.099675  774657 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 12:48:06.100553  774657 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59594->127.0.0.1:32768: read: connection reset by peer
	I0407 12:48:09.222881  774657 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-662808
	
	I0407 12:48:09.222916  774657 ubuntu.go:169] provisioning hostname "addons-662808"
	I0407 12:48:09.222976  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:09.239971  774657 main.go:141] libmachine: Using SSH client type: native
	I0407 12:48:09.240210  774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0407 12:48:09.240228  774657 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-662808 && echo "addons-662808" | sudo tee /etc/hostname
	I0407 12:48:09.369973  774657 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-662808
	
	I0407 12:48:09.370040  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:09.386741  774657 main.go:141] libmachine: Using SSH client type: native
	I0407 12:48:09.387013  774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0407 12:48:09.387038  774657 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-662808' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-662808/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-662808' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 12:48:09.507414  774657 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:48:09.507468  774657 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20598-766623/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-766623/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-766623/.minikube}
	I0407 12:48:09.507500  774657 ubuntu.go:177] setting up certificates
	I0407 12:48:09.507521  774657 provision.go:84] configureAuth start
	I0407 12:48:09.507594  774657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-662808
	I0407 12:48:09.524163  774657 provision.go:143] copyHostCerts
	I0407 12:48:09.524248  774657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-766623/.minikube/key.pem (1675 bytes)
	I0407 12:48:09.524361  774657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-766623/.minikube/ca.pem (1078 bytes)
	I0407 12:48:09.524445  774657 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-766623/.minikube/cert.pem (1123 bytes)
	I0407 12:48:09.524501  774657 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-766623/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca-key.pem org=jenkins.addons-662808 san=[127.0.0.1 192.168.49.2 addons-662808 localhost minikube]
	I0407 12:48:09.901216  774657 provision.go:177] copyRemoteCerts
	I0407 12:48:09.901279  774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 12:48:09.901316  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:09.918149  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:10.008028  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 12:48:10.029806  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 12:48:10.050777  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 12:48:10.071779  774657 provision.go:87] duration metric: took 564.238868ms to configureAuth
	I0407 12:48:10.071812  774657 ubuntu.go:193] setting minikube options for container-runtime
	I0407 12:48:10.071994  774657 config.go:182] Loaded profile config "addons-662808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:48:10.072050  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:10.088690  774657 main.go:141] libmachine: Using SSH client type: native
	I0407 12:48:10.088919  774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0407 12:48:10.088931  774657 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 12:48:10.215747  774657 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0407 12:48:10.215783  774657 ubuntu.go:71] root file system type: overlay
	I0407 12:48:10.215937  774657 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 12:48:10.216016  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:10.232577  774657 main.go:141] libmachine: Using SSH client type: native
	I0407 12:48:10.232838  774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0407 12:48:10.232945  774657 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 12:48:10.365819  774657 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 12:48:10.365904  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:10.382220  774657 main.go:141] libmachine: Using SSH client type: native
	I0407 12:48:10.382479  774657 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0407 12:48:10.382499  774657 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 12:48:11.095723  774657 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-03-25 15:05:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-04-07 12:48:10.360505775 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
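	The diff-then-replace one-liner above only installs docker.service.new and restarts Docker when the rendered unit differs from the installed one. A hedged sketch for confirming by hand that the ExecStart override took effect, using standard systemd tooling and assuming shell access to the node:

		systemctl cat docker | grep '^ExecStart='
		# expect a bare 'ExecStart=' (the reset directive) followed by the full dockerd command line
		systemctl show docker -p ExecStart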
	
	I0407 12:48:11.095755  774657 machine.go:96] duration metric: took 5.016014488s to provisionDockerMachine
	I0407 12:48:11.095767  774657 client.go:171] duration metric: took 16.916836853s to LocalClient.Create
	I0407 12:48:11.095785  774657 start.go:167] duration metric: took 16.916902688s to libmachine.API.Create "addons-662808"
	I0407 12:48:11.095792  774657 start.go:293] postStartSetup for "addons-662808" (driver="docker")
	I0407 12:48:11.095802  774657 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:48:11.095866  774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:48:11.095907  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:11.113234  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:11.204318  774657 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 12:48:11.207292  774657 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0407 12:48:11.207320  774657 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0407 12:48:11.207328  774657 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0407 12:48:11.207334  774657 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0407 12:48:11.207344  774657 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-766623/.minikube/addons for local assets ...
	I0407 12:48:11.207409  774657 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-766623/.minikube/files for local assets ...
	I0407 12:48:11.207451  774657 start.go:296] duration metric: took 111.650938ms for postStartSetup
	I0407 12:48:11.207755  774657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-662808
	I0407 12:48:11.224375  774657 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/config.json ...
	I0407 12:48:11.224597  774657 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 12:48:11.224637  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:11.240462  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:11.328174  774657 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0407 12:48:11.332136  774657 start.go:128] duration metric: took 17.155193027s to createHost
	I0407 12:48:11.332163  774657 start.go:83] releasing machines lock for "addons-662808", held for 17.155414505s
	I0407 12:48:11.332230  774657 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-662808
	I0407 12:48:11.348838  774657 ssh_runner.go:195] Run: cat /version.json
	I0407 12:48:11.348875  774657 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0407 12:48:11.348886  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:11.348945  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:11.366682  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:11.368377  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:11.451021  774657 ssh_runner.go:195] Run: systemctl --version
	I0407 12:48:11.524889  774657 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 12:48:11.529205  774657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0407 12:48:11.551381  774657 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0407 12:48:11.551468  774657 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:48:11.574270  774657 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0407 12:48:11.574303  774657 start.go:495] detecting cgroup driver to use...
	I0407 12:48:11.574340  774657 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:48:11.574460  774657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:48:11.588418  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 12:48:11.597085  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 12:48:11.605784  774657 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 12:48:11.605842  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 12:48:11.614567  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:48:11.622869  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 12:48:11.631098  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:48:11.639571  774657 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:48:11.647503  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 12:48:11.655901  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 12:48:11.664489  774657 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 12:48:11.673027  774657 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:48:11.680114  774657 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 12:48:11.680157  774657 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 12:48:11.692387  774657 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
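	The three steps above form a probe-then-load fallback: if the net.bridge.* sysctl key is missing, the br_netfilter module has not been loaded yet. A minimal sketch of the same sequence on a comparable host:

		# probe the key; on failure, load the module that provides it
		sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null || sudo modprobe br_netfilter
		# enable IPv4 forwarding, as the provisioner does next
		echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward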
	I0407 12:48:11.700571  774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:48:11.774499  774657 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 12:48:11.857329  774657 start.go:495] detecting cgroup driver to use...
	I0407 12:48:11.857443  774657 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0407 12:48:11.857518  774657 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 12:48:11.868417  774657 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0407 12:48:11.868471  774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 12:48:11.879128  774657 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:48:11.895683  774657 ssh_runner.go:195] Run: which cri-dockerd
	I0407 12:48:11.899010  774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 12:48:11.908005  774657 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 12:48:11.929858  774657 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 12:48:12.029791  774657 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 12:48:12.121575  774657 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 12:48:12.121717  774657 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 12:48:12.138730  774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:48:12.220571  774657 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 12:48:12.500686  774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 12:48:12.511702  774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:48:12.522103  774657 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 12:48:12.602261  774657 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 12:48:12.679105  774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:48:12.750592  774657 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 12:48:12.762496  774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:48:12.771946  774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:48:12.846916  774657 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 12:48:12.905360  774657 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 12:48:12.905445  774657 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 12:48:12.909355  774657 start.go:563] Will wait 60s for crictl version
	I0407 12:48:12.909419  774657 ssh_runner.go:195] Run: which crictl
	I0407 12:48:12.912591  774657 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 12:48:12.944347  774657 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.0.4
	RuntimeApiVersion:  v1
	I0407 12:48:12.944423  774657 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 12:48:12.967365  774657 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 12:48:12.992089  774657 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.4 ...
	I0407 12:48:12.992170  774657 cli_runner.go:164] Run: docker network inspect addons-662808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0407 12:48:13.008275  774657 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0407 12:48:13.011925  774657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:48:13.021905  774657 kubeadm.go:883] updating cluster {Name:addons-662808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 12:48:13.022022  774657 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:48:13.022076  774657 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 12:48:13.040590  774657 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 12:48:13.040627  774657 docker.go:619] Images already preloaded, skipping extraction
	I0407 12:48:13.040707  774657 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 12:48:13.059755  774657 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 12:48:13.059778  774657 cache_images.go:84] Images are preloaded, skipping loading
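	The preload check boils down to comparing the docker images listing against the image set required for the requested Kubernetes version. A rough sketch of the same comparison (the grep pattern is illustrative, not minikube's own code):

		docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'registry\.k8s\.io/.*:(v1\.32\.2|3\.5\.16-0|v1\.11\.3)'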
	I0407 12:48:13.059789  774657 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 docker true true} ...
	I0407 12:48:13.059876  774657 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-662808 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 12:48:13.059925  774657 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 12:48:13.103155  774657 cni.go:84] Creating CNI manager for ""
	I0407 12:48:13.103191  774657 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:48:13.103214  774657 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:48:13.103250  774657 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-662808 NodeName:addons-662808 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 12:48:13.103399  774657 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-662808"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 12:48:13.103497  774657 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 12:48:13.111876  774657 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 12:48:13.111938  774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 12:48:13.119863  774657 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0407 12:48:13.135667  774657 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:48:13.151175  774657 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
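	The generated kubeadm config is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml later. If reproducing this by hand, a hedged way to sanity-check such a config without touching the host is kubeadm's dry-run mode (assuming the matching kubeadm binary is on PATH inside the node):

		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run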
	I0407 12:48:13.166535  774657 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0407 12:48:13.169415  774657 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:48:13.178899  774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:48:13.251549  774657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:48:13.263762  774657 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808 for IP: 192.168.49.2
	I0407 12:48:13.263789  774657 certs.go:194] generating shared ca certs ...
	I0407 12:48:13.263809  774657 certs.go:226] acquiring lock for ca certs: {Name:mk3cba72d8e0a281d2351f9394ddea5be5fe0baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.263953  774657 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-766623/.minikube/ca.key
	I0407 12:48:13.385095  774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt ...
	I0407 12:48:13.385127  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt: {Name:mk893306cac75a6632c3479250f37deaf8ffa61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.385288  774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/ca.key ...
	I0407 12:48:13.385299  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/ca.key: {Name:mk83feba8c710103c1fe8fbb1c81e9479f11811c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.385374  774657 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.key
	I0407 12:48:13.571423  774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.crt ...
	I0407 12:48:13.571463  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.crt: {Name:mk652d34a62049fb2318a1e2d757c0f3d3e66935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.571622  774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.key ...
	I0407 12:48:13.571633  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.key: {Name:mk87f2f162271f48556e1a0132d00f5c3334cf9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.571708  774657 certs.go:256] generating profile certs ...
	I0407 12:48:13.571768  774657 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.key
	I0407 12:48:13.571782  774657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt with IP's: []
	I0407 12:48:13.714856  774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt ...
	I0407 12:48:13.714889  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: {Name:mkaeccfcdf311f349b781893ab0111a9d65c2f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.715046  774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.key ...
	I0407 12:48:13.715056  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.key: {Name:mk19c5cf6bbeb319c6c58793b41bf171751bcb5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.715124  774657 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key.dbcbf223
	I0407 12:48:13.715140  774657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt.dbcbf223 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0407 12:48:13.804982  774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt.dbcbf223 ...
	I0407 12:48:13.805015  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt.dbcbf223: {Name:mk19af499a320d6f1c26a50a80f1d200d7606753 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.805167  774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key.dbcbf223 ...
	I0407 12:48:13.805179  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key.dbcbf223: {Name:mk5087366c34e430720a75aac8d18b6e58a3291c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:13.805251  774657 certs.go:381] copying /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt.dbcbf223 -> /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt
	I0407 12:48:13.805319  774657 certs.go:385] copying /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key.dbcbf223 -> /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key
	I0407 12:48:13.805363  774657 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.key
	I0407 12:48:13.805380  774657 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.crt with IP's: []
	I0407 12:48:14.015475  774657 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.crt ...
	I0407 12:48:14.015504  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.crt: {Name:mk2606e62891c4f956d85a735a11ca5c61fbfb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:14.015657  774657 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.key ...
	I0407 12:48:14.015669  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.key: {Name:mk764917d9e3a7c7de356c4db8485bab79055b08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:14.015836  774657 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca-key.pem (1675 bytes)
	I0407 12:48:14.015877  774657 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/ca.pem (1078 bytes)
	I0407 12:48:14.015908  774657 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/cert.pem (1123 bytes)
	I0407 12:48:14.015931  774657 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-766623/.minikube/certs/key.pem (1675 bytes)
	I0407 12:48:14.016584  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:48:14.039027  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0407 12:48:14.059946  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:48:14.081162  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 12:48:14.101962  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0407 12:48:14.122410  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 12:48:14.143076  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:48:14.163802  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 12:48:14.184208  774657 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:48:14.204596  774657 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:48:14.219629  774657 ssh_runner.go:195] Run: openssl version
	I0407 12:48:14.224303  774657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:48:14.232260  774657 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:48:14.235109  774657 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:48 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:48:14.235164  774657 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:48:14.240988  774657 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
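	The b5213941.0 symlink name is not arbitrary: OpenSSL looks up CA certificates by subject hash, which is exactly what the openssl x509 -hash call above computes. Reproducing it on the node:

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# prints b5213941, hence the /etc/ssl/certs/b5213941.0 link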
	I0407 12:48:14.248921  774657 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:48:14.251726  774657 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 12:48:14.251780  774657 kubeadm.go:392] StartCluster: {Name:addons-662808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-662808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:48:14.251894  774657 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 12:48:14.269703  774657 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:48:14.277838  774657 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 12:48:14.285630  774657 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0407 12:48:14.285692  774657 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 12:48:14.293208  774657 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 12:48:14.293223  774657 kubeadm.go:157] found existing configuration files:
	
	I0407 12:48:14.293261  774657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 12:48:14.300884  774657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 12:48:14.300936  774657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 12:48:14.308203  774657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 12:48:14.315540  774657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 12:48:14.315590  774657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 12:48:14.322564  774657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 12:48:14.329829  774657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 12:48:14.329874  774657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 12:48:14.336995  774657 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 12:48:14.344443  774657 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 12:48:14.344487  774657 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 12:48:14.351513  774657 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0407 12:48:14.387769  774657 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 12:48:14.387858  774657 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 12:48:14.407746  774657 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0407 12:48:14.407849  774657 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1078-gcp
	I0407 12:48:14.407905  774657 kubeadm.go:310] OS: Linux
	I0407 12:48:14.407947  774657 kubeadm.go:310] CGROUPS_CPU: enabled
	I0407 12:48:14.407989  774657 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0407 12:48:14.408029  774657 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0407 12:48:14.408071  774657 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0407 12:48:14.408128  774657 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0407 12:48:14.408177  774657 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0407 12:48:14.408215  774657 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0407 12:48:14.408260  774657 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0407 12:48:14.408330  774657 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0407 12:48:14.458678  774657 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 12:48:14.458822  774657 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 12:48:14.458940  774657 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 12:48:14.468870  774657 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 12:48:14.472046  774657 out.go:235]   - Generating certificates and keys ...
	I0407 12:48:14.472133  774657 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 12:48:14.472214  774657 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 12:48:14.594103  774657 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 12:48:14.852878  774657 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 12:48:14.915355  774657 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 12:48:15.009571  774657 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 12:48:15.338372  774657 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 12:48:15.338553  774657 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-662808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0407 12:48:15.779584  774657 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 12:48:15.779738  774657 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-662808 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0407 12:48:16.148172  774657 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 12:48:16.221492  774657 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 12:48:16.326257  774657 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 12:48:16.326322  774657 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 12:48:16.567973  774657 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 12:48:16.769462  774657 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 12:48:16.891601  774657 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 12:48:17.204961  774657 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 12:48:17.402326  774657 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 12:48:17.402836  774657 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 12:48:17.405249  774657 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 12:48:17.407107  774657 out.go:235]   - Booting up control plane ...
	I0407 12:48:17.407216  774657 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 12:48:17.407324  774657 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 12:48:17.407994  774657 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 12:48:17.417431  774657 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 12:48:17.422712  774657 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 12:48:17.422758  774657 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 12:48:17.508817  774657 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 12:48:17.508966  774657 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 12:48:18.010356  774657 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.641646ms
	I0407 12:48:18.010474  774657 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 12:48:22.512227  774657 kubeadm.go:310] [api-check] The API server is healthy after 4.501847067s
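Both health gates above are plain HTTP(S) probes and can be reproduced by hand. A sketch, assuming the endpoints shown in this log (kubelet healthz on 127.0.0.1:10248; API server on 192.168.49.2:8443, with -k skipping CA verification for a quick check):

        curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok
        curl -sfk https://192.168.49.2:8443/livez && echo apiserver ok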
	I0407 12:48:22.523764  774657 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 12:48:22.532510  774657 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 12:48:22.547204  774657 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 12:48:22.547409  774657 kubeadm.go:310] [mark-control-plane] Marking the node addons-662808 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 12:48:22.553911  774657 kubeadm.go:310] [bootstrap-token] Using token: l33d72.k4e0y92fadibmgkp
	I0407 12:48:22.555213  774657 out.go:235]   - Configuring RBAC rules ...
	I0407 12:48:22.555345  774657 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 12:48:22.558861  774657 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 12:48:22.564088  774657 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 12:48:22.566404  774657 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 12:48:22.568810  774657 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 12:48:22.571020  774657 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 12:48:22.918327  774657 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 12:48:23.343029  774657 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 12:48:23.919633  774657 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 12:48:23.921664  774657 kubeadm.go:310] 
	I0407 12:48:23.921735  774657 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 12:48:23.921746  774657 kubeadm.go:310] 
	I0407 12:48:23.921838  774657 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 12:48:23.921849  774657 kubeadm.go:310] 
	I0407 12:48:23.921869  774657 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 12:48:23.921931  774657 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 12:48:23.922002  774657 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 12:48:23.922011  774657 kubeadm.go:310] 
	I0407 12:48:23.922082  774657 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 12:48:23.922091  774657 kubeadm.go:310] 
	I0407 12:48:23.922169  774657 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 12:48:23.922178  774657 kubeadm.go:310] 
	I0407 12:48:23.922234  774657 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 12:48:23.922358  774657 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 12:48:23.922469  774657 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 12:48:23.922479  774657 kubeadm.go:310] 
	I0407 12:48:23.922586  774657 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 12:48:23.922690  774657 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 12:48:23.922698  774657 kubeadm.go:310] 
	I0407 12:48:23.922771  774657 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token l33d72.k4e0y92fadibmgkp \
	I0407 12:48:23.922876  774657 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:297cd9f04e1377b467e784b4e94a115886bd24e4049f300320a16578be94ae88 \
	I0407 12:48:23.922895  774657 kubeadm.go:310] 	--control-plane 
	I0407 12:48:23.922899  774657 kubeadm.go:310] 
	I0407 12:48:23.923009  774657 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 12:48:23.923019  774657 kubeadm.go:310] 
	I0407 12:48:23.923122  774657 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token l33d72.k4e0y92fadibmgkp \
	I0407 12:48:23.923278  774657 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:297cd9f04e1377b467e784b4e94a115886bd24e4049f300320a16578be94ae88 
	I0407 12:48:23.925440  774657 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0407 12:48:23.925722  774657 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1078-gcp\n", err: exit status 1
	I0407 12:48:23.925871  774657 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
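The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, so a printed join command can be verified by recomputing the hash on the control plane. A sketch using the conventional kubeadm path; this cluster keeps its certs under /var/lib/minikube/certs per the [certs] line earlier, so substitute accordingly:

        openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
          | openssl rsa -pubin -outform der 2>/dev/null \
          | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'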
	I0407 12:48:23.925897  774657 cni.go:84] Creating CNI manager for ""
	I0407 12:48:23.925923  774657 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:48:23.927525  774657 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 12:48:23.928622  774657 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 12:48:23.937477  774657 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
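The 496 bytes copied here are minikube's bridge CNI config. A minimal sketch of a conflist of that shape (field values are illustrative, not the exact contents of 1-k8s.conflist; the 10.244.0.0/16 subnet matches the pod IPs seen elsewhere in this report):

        cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
        {
          "cniVersion": "0.3.1",
          "name": "bridge",
          "plugins": [
            {
              "type": "bridge", "bridge": "bridge",
              "isGateway": true, "ipMasq": true,
              "ipam": {
                "type": "host-local",
                "ranges": [[{ "subnet": "10.244.0.0/16" }]],
                "routes": [{ "dst": "0.0.0.0/0" }]
              }
            },
            { "type": "portmap", "capabilities": { "portMappings": true } }
          ]
        }
        EOF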
	I0407 12:48:23.953964  774657 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 12:48:23.954032  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:23.954064  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-662808 minikube.k8s.io/updated_at=2025_04_07T12_48_23_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=addons-662808 minikube.k8s.io/primary=true
	I0407 12:48:23.960981  774657 ops.go:34] apiserver oom_adj: -16
	I0407 12:48:24.044483  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:24.545263  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:25.044824  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:25.545441  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:26.045607  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:26.544950  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:27.045254  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:27.544930  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:28.044632  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:28.545386  774657 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 12:48:28.625308  774657 kubeadm.go:1113] duration metric: took 4.671334465s to wait for elevateKubeSystemPrivileges
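The burst of identical `kubectl get sa default` runs above is a poll loop: elevateKubeSystemPrivileges cannot bind cluster-admin to kube-system's default ServiceAccount until the controller-manager has created it. The same wait as a standalone sketch:

        # poll until the default ServiceAccount exists
        until sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
          sleep 0.5
        done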
	I0407 12:48:28.625346  774657 kubeadm.go:394] duration metric: took 14.373570989s to StartCluster
	I0407 12:48:28.625377  774657 settings.go:142] acquiring lock: {Name:mke7ff97dc38733275c7b62a22ebd9966fea8bd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:28.625506  774657 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20598-766623/kubeconfig
	I0407 12:48:28.625978  774657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-766623/kubeconfig: {Name:mkaac003ac5f75e318e3728115e1b4b0fe8249ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:48:28.626187  774657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 12:48:28.626248  774657 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:48:28.626346  774657 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
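The toEnable map is the resolved addon set for this profile; the same switches are exposed on the minikube CLI. For example, with the profile name from this run:

        minikube -p addons-662808 addons list
        minikube -p addons-662808 addons enable ingress
        minikube -p addons-662808 addons disable volcano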
	I0407 12:48:28.626454  774657 config.go:182] Loaded profile config "addons-662808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:48:28.626482  774657 addons.go:69] Setting yakd=true in profile "addons-662808"
	I0407 12:48:28.626483  774657 addons.go:69] Setting default-storageclass=true in profile "addons-662808"
	I0407 12:48:28.626503  774657 addons.go:238] Setting addon yakd=true in "addons-662808"
	I0407 12:48:28.626509  774657 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-662808"
	I0407 12:48:28.626515  774657 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-662808"
	I0407 12:48:28.626517  774657 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-662808"
	I0407 12:48:28.626534  774657 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-662808"
	I0407 12:48:28.626547  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.626554  774657 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-662808"
	I0407 12:48:28.626563  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.626533  774657 addons.go:69] Setting cloud-spanner=true in profile "addons-662808"
	I0407 12:48:28.626591  774657 addons.go:238] Setting addon cloud-spanner=true in "addons-662808"
	I0407 12:48:28.626610  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.626662  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.626750  774657 addons.go:69] Setting ingress-dns=true in profile "addons-662808"
	I0407 12:48:28.626796  774657 addons.go:238] Setting addon ingress-dns=true in "addons-662808"
	I0407 12:48:28.626841  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.626954  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.627100  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.627105  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.627110  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.627123  774657 addons.go:69] Setting metrics-server=true in profile "addons-662808"
	I0407 12:48:28.627136  774657 addons.go:238] Setting addon metrics-server=true in "addons-662808"
	I0407 12:48:28.627171  774657 addons.go:69] Setting gcp-auth=true in profile "addons-662808"
	I0407 12:48:28.627193  774657 mustload.go:65] Loading cluster: addons-662808
	I0407 12:48:28.627292  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.627353  774657 config.go:182] Loaded profile config "addons-662808": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:48:28.627379  774657 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-662808"
	I0407 12:48:28.627414  774657 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-662808"
	I0407 12:48:28.627609  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.627114  774657 addons.go:69] Setting inspektor-gadget=true in profile "addons-662808"
	I0407 12:48:28.627765  774657 addons.go:238] Setting addon inspektor-gadget=true in "addons-662808"
	I0407 12:48:28.627795  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.627808  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.627928  774657 addons.go:69] Setting ingress=true in profile "addons-662808"
	I0407 12:48:28.627947  774657 addons.go:238] Setting addon ingress=true in "addons-662808"
	I0407 12:48:28.627988  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.628222  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.628257  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.628482  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.628647  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.628801  774657 out.go:177] * Verifying Kubernetes components...
	I0407 12:48:28.630019  774657 addons.go:69] Setting registry=true in profile "addons-662808"
	I0407 12:48:28.630050  774657 addons.go:238] Setting addon registry=true in "addons-662808"
	I0407 12:48:28.630078  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.630310  774657 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:48:28.630500  774657 addons.go:69] Setting storage-provisioner=true in profile "addons-662808"
	I0407 12:48:28.630558  774657 addons.go:238] Setting addon storage-provisioner=true in "addons-662808"
	I0407 12:48:28.630591  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.630765  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.631091  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.633826  774657 addons.go:69] Setting volumesnapshots=true in profile "addons-662808"
	I0407 12:48:28.633856  774657 addons.go:238] Setting addon volumesnapshots=true in "addons-662808"
	I0407 12:48:28.633877  774657 addons.go:69] Setting volcano=true in profile "addons-662808"
	I0407 12:48:28.633896  774657 addons.go:238] Setting addon volcano=true in "addons-662808"
	I0407 12:48:28.633897  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.633936  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.627102  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.634412  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.634420  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.626504  774657 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-662808"
	I0407 12:48:28.635085  774657 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-662808"
	I0407 12:48:28.635134  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.635822  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.666847  774657 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0407 12:48:28.667804  774657 addons.go:238] Setting addon default-storageclass=true in "addons-662808"
	I0407 12:48:28.667915  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.668261  774657 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0407 12:48:28.668464  774657 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0407 12:48:28.668486  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0407 12:48:28.668545  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.668856  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.670044  774657 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:48:28.670067  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0407 12:48:28.670140  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.672008  774657 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0407 12:48:28.672576  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.673159  774657 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0407 12:48:28.673180  774657 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0407 12:48:28.673248  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.676074  774657 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0407 12:48:28.677203  774657 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0407 12:48:28.677228  774657 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0407 12:48:28.677309  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.699203  774657 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
	I0407 12:48:28.699204  774657 out.go:177]   - Using image docker.io/registry:2.8.3
	I0407 12:48:28.700557  774657 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0407 12:48:28.700642  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0407 12:48:28.700777  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.702223  774657 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0407 12:48:28.703805  774657 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0407 12:48:28.703832  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0407 12:48:28.703910  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.706850  774657 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0407 12:48:28.708357  774657 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0407 12:48:28.708381  774657 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0407 12:48:28.708491  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.718559  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.729455  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.730403  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.739204  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0407 12:48:28.741231  774657 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
	I0407 12:48:28.746027  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0407 12:48:28.746284  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0407 12:48:28.747197  774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0407 12:48:28.747230  774657 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0407 12:48:28.747306  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.747587  774657 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
	I0407 12:48:28.748719  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0407 12:48:28.748838  774657 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
	I0407 12:48:28.751584  774657 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0407 12:48:28.751612  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
	I0407 12:48:28.751675  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.752989  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0407 12:48:28.753980  774657 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0407 12:48:28.754101  774657 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 12:48:28.755112  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0407 12:48:28.755373  774657 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:48:28.755408  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 12:48:28.755505  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.757281  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0407 12:48:28.757373  774657 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:48:28.758278  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0407 12:48:28.759165  774657 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 12:48:28.759205  774657 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 12:48:28.759308  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.761710  774657 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:48:28.762185  774657 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0407 12:48:28.763101  774657 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0407 12:48:28.763121  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0407 12:48:28.763172  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.763407  774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0407 12:48:28.763418  774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0407 12:48:28.763486  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.764899  774657 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0407 12:48:28.766758  774657 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:48:28.766780  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0407 12:48:28.766835  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:28.767531  774657 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-662808"
	I0407 12:48:28.767577  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:28.768073  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:28.771200  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.777529  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.788413  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.788578  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.794916  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.799092  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.800676  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.801432  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.807353  774657 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0407 12:48:28.809389  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.810360  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.810527  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.811501  774657 out.go:177]   - Using image docker.io/busybox:stable
	I0407 12:48:28.812549  774657 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0407 12:48:28.812564  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0407 12:48:28.812606  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	W0407 12:48:28.825320  774657 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0407 12:48:28.825359  774657 retry.go:31] will retry after 246.271386ms: ssh: handshake failed: EOF
	W0407 12:48:28.825492  774657 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0407 12:48:28.825501  774657 retry.go:31] will retry after 182.340587ms: ssh: handshake failed: EOF
	W0407 12:48:28.825571  774657 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0407 12:48:28.825578  774657 retry.go:31] will retry after 141.596119ms: ssh: handshake failed: EOF
	I0407 12:48:28.848810  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:28.939301  774657 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
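The pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block resolving host.minikube.internal to the gateway IP and enables query logging, then replaces the ConfigMap. A sketch of how to inspect the result, with the fragment the sed edits are expected to produce shown as comments:

        kubectl --context addons-662808 -n kube-system \
          get configmap coredns -o jsonpath='{.data.Corefile}'
        # expected to contain, roughly:
        #     hosts {
        #        192.168.49.1 host.minikube.internal
        #        fallthrough
        #     }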
	I0407 12:48:28.939484  774657 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:48:28.951701  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0407 12:48:29.133877  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 12:48:29.223182  774657 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0407 12:48:29.223202  774657 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0407 12:48:29.223215  774657 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0407 12:48:29.223220  774657 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0407 12:48:29.235955  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0407 12:48:29.321669  774657 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0407 12:48:29.321701  774657 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0407 12:48:29.331537  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0407 12:48:29.340720  774657 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0407 12:48:29.340820  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0407 12:48:29.425616  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 12:48:29.438738  774657 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:48:29.438934  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0407 12:48:29.524634  774657 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0407 12:48:29.524735  774657 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0407 12:48:29.527528  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0407 12:48:29.620132  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0407 12:48:29.621559  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0407 12:48:29.635787  774657 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0407 12:48:29.635872  774657 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0407 12:48:29.638999  774657 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0407 12:48:29.639076  774657 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0407 12:48:29.821788  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0407 12:48:29.823746  774657 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:48:29.823819  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0407 12:48:29.934385  774657 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0407 12:48:29.934473  774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0407 12:48:29.941472  774657 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0407 12:48:29.941553  774657 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0407 12:48:30.022260  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0407 12:48:30.022922  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0407 12:48:30.035541  774657 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:48:30.035661  774657 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0407 12:48:30.324433  774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0407 12:48:30.324472  774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0407 12:48:30.330612  774657 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:48:30.330701  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0407 12:48:30.336971  774657 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0407 12:48:30.337064  774657 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0407 12:48:30.624907  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0407 12:48:30.635244  774657 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0407 12:48:30.635281  774657 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0407 12:48:30.831223  774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0407 12:48:30.831309  774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0407 12:48:30.920898  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0407 12:48:31.141606  774657 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.202203959s)
	I0407 12:48:31.141787  774657 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0407 12:48:31.141730  774657 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.20219511s)
	I0407 12:48:31.142782  774657 node_ready.go:35] waiting up to 6m0s for node "addons-662808" to be "Ready" ...
	I0407 12:48:31.222708  774657 node_ready.go:49] node "addons-662808" has status "Ready":"True"
	I0407 12:48:31.222798  774657 node_ready.go:38] duration metric: took 79.940802ms for node "addons-662808" to be "Ready" ...
	I0407 12:48:31.222822  774657 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:48:31.230444  774657 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace to be "Ready" ...
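pod_ready.go walks the system-critical label selectors one pod at a time. Each of those waits has a direct kubectl equivalent; a sketch for the kube-dns selector, mirroring the 6m budget above:

        kubectl --context addons-662808 -n kube-system wait pod \
          -l k8s-app=kube-dns --for=condition=Ready --timeout=6m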
	I0407 12:48:31.436557  774657 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:48:31.436585  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0407 12:48:31.631451  774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0407 12:48:31.631494  774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0407 12:48:31.723584  774657 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-662808" context rescaled to 1 replicas
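The rescale logged by kapi.go trims CoreDNS to one replica, which is enough on a single-node cluster. Done by hand it would be:

        kubectl --context addons-662808 -n kube-system \
          scale deployment coredns --replicas=1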
	I0407 12:48:32.338068  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:48:32.421899  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.470098124s)
	I0407 12:48:32.435674  774657 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0407 12:48:32.435786  774657 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0407 12:48:32.942884  774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0407 12:48:32.942914  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0407 12:48:33.038500  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.904567167s)
	I0407 12:48:33.038590  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.802545106s)
	I0407 12:48:33.237599  774657 pod_ready.go:103] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:33.425428  774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0407 12:48:33.425461  774657 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0407 12:48:34.032752  774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0407 12:48:34.032783  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0407 12:48:34.225272  774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0407 12:48:34.225365  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0407 12:48:34.730262  774657 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:48:34.730354  774657 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0407 12:48:35.121048  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0407 12:48:35.430257  774657 pod_ready.go:103] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:35.531930  774657 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0407 12:48:35.532084  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:35.654178  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:36.544079  774657 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0407 12:48:36.932625  774657 addons.go:238] Setting addon gcp-auth=true in "addons-662808"
	I0407 12:48:36.932693  774657 host.go:66] Checking if "addons-662808" exists ...
	I0407 12:48:36.933244  774657 cli_runner.go:164] Run: docker container inspect addons-662808 --format={{.State.Status}}
	I0407 12:48:36.955903  774657 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0407 12:48:36.955956  774657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-662808
	I0407 12:48:36.972680  774657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/addons-662808/id_rsa Username:docker}
	I0407 12:48:37.737260  774657 pod_ready.go:103] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:40.241200  774657 pod_ready.go:103] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:41.737600  774657 pod_ready.go:93] pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace has status "Ready":"True"
	I0407 12:48:41.737631  774657 pod_ready.go:82] duration metric: took 10.507083363s for pod "amd-gpu-device-plugin-l66rh" in "kube-system" namespace to be "Ready" ...
	I0407 12:48:41.737651  774657 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace to be "Ready" ...
	I0407 12:48:41.741782  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.410207428s)
	I0407 12:48:41.741860  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (12.316161256s)
	I0407 12:48:41.742148  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.214548707s)
	I0407 12:48:41.742357  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (12.122116069s)
	I0407 12:48:41.742499  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (12.120873847s)
	I0407 12:48:41.742515  774657 addons.go:479] Verifying addon ingress=true in "addons-662808"
	I0407 12:48:41.742905  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.921016081s)
	I0407 12:48:41.742971  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.719977403s)
	I0407 12:48:41.742990  774657 addons.go:479] Verifying addon registry=true in "addons-662808"
	I0407 12:48:41.743051  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (11.720690423s)
	I0407 12:48:41.743235  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.822238012s)
	I0407 12:48:41.743315  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.118369842s)
	I0407 12:48:41.743935  774657 addons.go:479] Verifying addon metrics-server=true in "addons-662808"
	I0407 12:48:41.743352  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.405175195s)
	W0407 12:48:41.743986  774657 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0407 12:48:41.744005  774657 retry.go:31] will retry after 343.042144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
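This failure is the usual CRD/CR ordering race: all six manifests go through one kubectl apply, and the VolumeSnapshotClass object is submitted before the just-created volumesnapshotclasses CRD is registered, so its kind cannot be mapped yet. minikube handles it by retrying (a retry with --force appears below); a sequential sketch that avoids the race:

        # register the CRD and wait until the API server serves it
        kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
        kubectl wait --for=condition=Established \
          crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
        # only then create objects of the new kind
        kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml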
	I0407 12:48:41.744950  774657 out.go:177] * Verifying registry addon...
	I0407 12:48:41.744987  774657 out.go:177] * Verifying ingress addon...
	I0407 12:48:41.745784  774657 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-662808 service yakd-dashboard -n yakd-dashboard
	
	I0407 12:48:41.748794  774657 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0407 12:48:41.749969  774657 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0407 12:48:41.841743  774657 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
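The default-storageclass warning is an optimistic-concurrency conflict: the update to the local-path StorageClass raced another writer at the same resourceVersion and was rejected. Default-ness is just an annotation, so the same change applied by hand (class names taken from this run; "standard" is minikube's built-in class) would be:

        kubectl patch storageclass local-path -p \
          '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
        kubectl patch storageclass standard -p \
          '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'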
	I0407 12:48:41.843394  774657 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0407 12:48:41.843422  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:41.843593  774657 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0407 12:48:41.843608  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:42.087614  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0407 12:48:42.322570  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:42.423002  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:42.824705  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:42.825041  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:42.829958  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.708779578s)
	I0407 12:48:42.830035  774657 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-662808"
	I0407 12:48:42.830249  774657 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.874312621s)
	I0407 12:48:42.831387  774657 out.go:177] * Verifying csi-hostpath-driver addon...
	I0407 12:48:42.831475  774657 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0407 12:48:42.833086  774657 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0407 12:48:42.833426  774657 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0407 12:48:42.834745  774657 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0407 12:48:42.834771  774657 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0407 12:48:42.849546  774657 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0407 12:48:42.849571  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:42.937424  774657 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0407 12:48:42.937454  774657 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0407 12:48:43.028457  774657 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:48:43.028564  774657 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0407 12:48:43.128873  774657 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0407 12:48:43.323172  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:43.323219  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:43.338255  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:43.744647  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:43.821756  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:43.821950  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:43.837149  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:44.252968  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:44.253317  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:44.338169  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:44.652098  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.56442326s)
	I0407 12:48:44.652193  774657 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.523276069s)
	I0407 12:48:44.653201  774657 addons.go:479] Verifying addon gcp-auth=true in "addons-662808"
	I0407 12:48:44.655840  774657 out.go:177] * Verifying gcp-auth addon...
	I0407 12:48:44.657640  774657 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0407 12:48:44.722370  774657 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:48:44.823171  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:44.823203  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:44.836648  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:45.252258  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:45.252865  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:45.337725  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:45.752010  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:45.754343  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:45.837903  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:46.243554  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:46.252201  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:46.252843  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:46.337346  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:46.752367  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:46.752456  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:46.837133  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:47.252201  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:47.252550  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:47.336956  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:47.752087  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:47.752828  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:47.837318  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:48.251990  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:48.252674  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:48.337025  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:48.742794  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:48.751937  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:48.752091  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:48.837032  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:49.252377  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:49.252415  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:49.337650  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:49.752104  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:49.752842  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:49.837533  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:50.252574  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:50.252581  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:50.336525  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:50.743523  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:50.751894  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:50.752620  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:50.837065  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:51.251589  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:51.253601  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:51.337171  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:51.751518  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:51.753095  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:51.837877  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:52.252138  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:52.252638  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:52.337456  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:52.751721  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:52.752293  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:52.837574  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:53.243059  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:53.251848  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:53.252263  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:53.337808  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:53.751635  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:53.752523  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:53.838057  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:54.251756  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:54.252514  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:54.340507  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:54.751776  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:54.752769  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:54.836979  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:55.243393  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:55.251881  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:55.252778  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:55.336984  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:55.751888  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:55.752617  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:55.836592  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:56.251458  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:56.253073  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:56.337388  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:56.751843  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:56.752595  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:56.836930  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:57.243689  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:57.252411  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:57.252702  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:57.336702  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:57.806618  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:57.806767  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:57.837104  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:58.251786  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:58.252571  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:58.351561  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:58.752094  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:58.752884  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:58.837670  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:59.252538  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:59.252946  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:59.336736  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:48:59.743288  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:48:59.761515  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:48:59.761544  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:48:59.862687  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:00.251682  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:00.252650  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:00.337140  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:00.751253  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:00.752194  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:00.837536  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:01.251590  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:01.252436  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:01.338006  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:01.743579  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:49:01.752304  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:01.752946  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:01.837182  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:02.262809  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:02.262907  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:02.363250  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:02.752094  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:02.752608  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:02.836980  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:03.251641  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:03.252553  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:03.336739  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:03.743802  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:49:03.751341  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:03.752182  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:03.837256  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:04.251557  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:04.253087  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:04.337219  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:04.751990  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:04.752908  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0407 12:49:04.837150  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:05.285834  774657 kapi.go:107] duration metric: took 23.53585631s to wait for kubernetes.io/minikube-addons=registry ...
	I0407 12:49:05.286004  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:05.363212  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:05.744239  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:49:05.751805  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:05.836744  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:06.252063  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:06.337490  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:06.751870  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:06.837016  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:07.251715  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:07.337460  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:07.745603  774657 pod_ready.go:103] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"False"
	I0407 12:49:07.751972  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:07.837402  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:08.251800  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:08.337105  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:08.743698  774657 pod_ready.go:93] pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace has status "Ready":"True"
	I0407 12:49:08.743729  774657 pod_ready.go:82] duration metric: took 27.006069221s for pod "coredns-668d6bf9bc-2kx5j" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.743744  774657 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-75t2w" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.748097  774657 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-75t2w" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-75t2w" not found
	I0407 12:49:08.748130  774657 pod_ready.go:82] duration metric: took 4.377554ms for pod "coredns-668d6bf9bc-75t2w" in "kube-system" namespace to be "Ready" ...
	E0407 12:49:08.748145  774657 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-75t2w" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-75t2w" not found
	I0407 12:49:08.748155  774657 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-662808" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.751285  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:08.752354  774657 pod_ready.go:93] pod "etcd-addons-662808" in "kube-system" namespace has status "Ready":"True"
	I0407 12:49:08.752372  774657 pod_ready.go:82] duration metric: took 4.205493ms for pod "etcd-addons-662808" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.752384  774657 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-662808" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.756482  774657 pod_ready.go:93] pod "kube-apiserver-addons-662808" in "kube-system" namespace has status "Ready":"True"
	I0407 12:49:08.756503  774657 pod_ready.go:82] duration metric: took 4.111433ms for pod "kube-apiserver-addons-662808" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.756516  774657 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-662808" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.760374  774657 pod_ready.go:93] pod "kube-controller-manager-addons-662808" in "kube-system" namespace has status "Ready":"True"
	I0407 12:49:08.760391  774657 pod_ready.go:82] duration metric: took 3.867645ms for pod "kube-controller-manager-addons-662808" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.760401  774657 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cgdfz" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.837859  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:08.941453  774657 pod_ready.go:93] pod "kube-proxy-cgdfz" in "kube-system" namespace has status "Ready":"True"
	I0407 12:49:08.941482  774657 pod_ready.go:82] duration metric: took 181.073388ms for pod "kube-proxy-cgdfz" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:08.941495  774657 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-662808" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:09.252581  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:09.337435  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:09.340616  774657 pod_ready.go:93] pod "kube-scheduler-addons-662808" in "kube-system" namespace has status "Ready":"True"
	I0407 12:49:09.340645  774657 pod_ready.go:82] duration metric: took 399.138574ms for pod "kube-scheduler-addons-662808" in "kube-system" namespace to be "Ready" ...
	I0407 12:49:09.340657  774657 pod_ready.go:39] duration metric: took 38.117808805s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 12:49:09.340687  774657 api_server.go:52] waiting for apiserver process to appear ...
	I0407 12:49:09.340749  774657 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 12:49:09.359683  774657 api_server.go:72] duration metric: took 40.733392159s to wait for apiserver process to appear ...
	I0407 12:49:09.359712  774657 api_server.go:88] waiting for apiserver healthz status ...
	I0407 12:49:09.359736  774657 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0407 12:49:09.363612  774657 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
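
	The healthz probe above can be reproduced by hand; either form below works, the second without needing the server address or certificates on the command line (the address and context name are taken from this run):

	    curl -k https://192.168.49.2:8443/healthz
	    kubectl --context addons-662808 get --raw /healthz
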
	I0407 12:49:09.364678  774657 api_server.go:141] control plane version: v1.32.2
	I0407 12:49:09.364705  774657 api_server.go:131] duration metric: took 4.98495ms to wait for apiserver health ...
	I0407 12:49:09.364716  774657 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 12:49:09.543246  774657 system_pods.go:59] 18 kube-system pods found
	I0407 12:49:09.543301  774657 system_pods.go:61] "amd-gpu-device-plugin-l66rh" [a5fb26c5-73e6-4735-a212-e1b9c91e7d5c] Running
	I0407 12:49:09.543312  774657 system_pods.go:61] "coredns-668d6bf9bc-2kx5j" [7871d918-36bc-48ba-988e-2f65e075c4b5] Running
	I0407 12:49:09.543325  774657 system_pods.go:61] "csi-hostpath-attacher-0" [a9c263af-a860-4403-a74d-39a5679c372e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:49:09.543337  774657 system_pods.go:61] "csi-hostpath-resizer-0" [63ba8f75-61ff-4844-aef3-769cc7389f24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0407 12:49:09.543349  774657 system_pods.go:61] "csi-hostpathplugin-5w4kl" [4c65ef1f-5f22-4b45-be30-ece32afb0e3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:49:09.543357  774657 system_pods.go:61] "etcd-addons-662808" [37a14464-f055-4cd0-aab2-2e8c975e9f8e] Running
	I0407 12:49:09.543361  774657 system_pods.go:61] "kube-apiserver-addons-662808" [53b37cc3-f3de-4773-bf32-e79021718953] Running
	I0407 12:49:09.543365  774657 system_pods.go:61] "kube-controller-manager-addons-662808" [60ca4f96-9f7a-4e87-8c59-98b0face6e5e] Running
	I0407 12:49:09.543373  774657 system_pods.go:61] "kube-ingress-dns-minikube" [cf9995a2-94ee-45b7-9333-083c10ffac79] Running
	I0407 12:49:09.543376  774657 system_pods.go:61] "kube-proxy-cgdfz" [3e69be31-74a4-41e6-bde2-4615805d9512] Running
	I0407 12:49:09.543379  774657 system_pods.go:61] "kube-scheduler-addons-662808" [393769aa-3d0f-46dd-870e-932a02635cae] Running
	I0407 12:49:09.543383  774657 system_pods.go:61] "metrics-server-7fbb699795-5bqmp" [d86b42f2-cba5-4d53-8277-99e8dc49f20f] Running
	I0407 12:49:09.543387  774657 system_pods.go:61] "nvidia-device-plugin-daemonset-rv6cl" [37f5078d-e3a7-43d5-a718-db741b45b741] Running
	I0407 12:49:09.543390  774657 system_pods.go:61] "registry-6c88467877-g6r5h" [b30d0273-c82f-46a6-a761-fd905b1d3783] Running
	I0407 12:49:09.543395  774657 system_pods.go:61] "registry-proxy-vqvmc" [9829d87d-bb8e-4c3d-b885-03deb72b4409] Running
	I0407 12:49:09.543405  774657 system_pods.go:61] "snapshot-controller-68b874b76f-cv7cx" [8979a27d-2c7b-45cb-a9ef-57da407ff64f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:49:09.543416  774657 system_pods.go:61] "snapshot-controller-68b874b76f-g9ln7" [194e86dc-ac8e-4292-a7e9-e9f3ee215c9f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:49:09.543446  774657 system_pods.go:61] "storage-provisioner" [28fcbb16-5c69-498d-9882-b4a8c2ed606f] Running
	I0407 12:49:09.543460  774657 system_pods.go:74] duration metric: took 178.736638ms to wait for pod list to return data ...
	I0407 12:49:09.543473  774657 default_sa.go:34] waiting for default service account to be created ...
	I0407 12:49:09.741969  774657 default_sa.go:45] found service account: "default"
	I0407 12:49:09.742000  774657 default_sa.go:55] duration metric: took 198.519241ms for default service account to be created ...
	I0407 12:49:09.742015  774657 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 12:49:09.751754  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:09.837102  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:09.943070  774657 system_pods.go:86] 18 kube-system pods found
	I0407 12:49:09.943110  774657 system_pods.go:89] "amd-gpu-device-plugin-l66rh" [a5fb26c5-73e6-4735-a212-e1b9c91e7d5c] Running
	I0407 12:49:09.943120  774657 system_pods.go:89] "coredns-668d6bf9bc-2kx5j" [7871d918-36bc-48ba-988e-2f65e075c4b5] Running
	I0407 12:49:09.943131  774657 system_pods.go:89] "csi-hostpath-attacher-0" [a9c263af-a860-4403-a74d-39a5679c372e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0407 12:49:09.943139  774657 system_pods.go:89] "csi-hostpath-resizer-0" [63ba8f75-61ff-4844-aef3-769cc7389f24] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0407 12:49:09.943154  774657 system_pods.go:89] "csi-hostpathplugin-5w4kl" [4c65ef1f-5f22-4b45-be30-ece32afb0e3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0407 12:49:09.943162  774657 system_pods.go:89] "etcd-addons-662808" [37a14464-f055-4cd0-aab2-2e8c975e9f8e] Running
	I0407 12:49:09.943168  774657 system_pods.go:89] "kube-apiserver-addons-662808" [53b37cc3-f3de-4773-bf32-e79021718953] Running
	I0407 12:49:09.943176  774657 system_pods.go:89] "kube-controller-manager-addons-662808" [60ca4f96-9f7a-4e87-8c59-98b0face6e5e] Running
	I0407 12:49:09.943183  774657 system_pods.go:89] "kube-ingress-dns-minikube" [cf9995a2-94ee-45b7-9333-083c10ffac79] Running
	I0407 12:49:09.943190  774657 system_pods.go:89] "kube-proxy-cgdfz" [3e69be31-74a4-41e6-bde2-4615805d9512] Running
	I0407 12:49:09.943195  774657 system_pods.go:89] "kube-scheduler-addons-662808" [393769aa-3d0f-46dd-870e-932a02635cae] Running
	I0407 12:49:09.943203  774657 system_pods.go:89] "metrics-server-7fbb699795-5bqmp" [d86b42f2-cba5-4d53-8277-99e8dc49f20f] Running
	I0407 12:49:09.943208  774657 system_pods.go:89] "nvidia-device-plugin-daemonset-rv6cl" [37f5078d-e3a7-43d5-a718-db741b45b741] Running
	I0407 12:49:09.943216  774657 system_pods.go:89] "registry-6c88467877-g6r5h" [b30d0273-c82f-46a6-a761-fd905b1d3783] Running
	I0407 12:49:09.943221  774657 system_pods.go:89] "registry-proxy-vqvmc" [9829d87d-bb8e-4c3d-b885-03deb72b4409] Running
	I0407 12:49:09.943228  774657 system_pods.go:89] "snapshot-controller-68b874b76f-cv7cx" [8979a27d-2c7b-45cb-a9ef-57da407ff64f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:49:09.943238  774657 system_pods.go:89] "snapshot-controller-68b874b76f-g9ln7" [194e86dc-ac8e-4292-a7e9-e9f3ee215c9f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0407 12:49:09.943245  774657 system_pods.go:89] "storage-provisioner" [28fcbb16-5c69-498d-9882-b4a8c2ed606f] Running
	I0407 12:49:09.943265  774657 system_pods.go:126] duration metric: took 201.241394ms to wait for k8s-apps to be running ...
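
	A quick way to reproduce this k8s-apps check outside the harness is to ask for any kube-system pod that has not reached the Running phase (context name from this run); an empty result means everything is Running, while the CSI and snapshot-controller pods listed above would still show as Pending at this point:

	    kubectl --context addons-662808 -n kube-system get pods \
	      --field-selector=status.phase!=Running
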
	I0407 12:49:09.943278  774657 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 12:49:09.943332  774657 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 12:49:09.957589  774657 system_svc.go:56] duration metric: took 14.301888ms WaitForService to wait for kubelet
	I0407 12:49:09.957626  774657 kubeadm.go:582] duration metric: took 41.331341917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:49:09.957649  774657 node_conditions.go:102] verifying NodePressure condition ...
	I0407 12:49:10.142195  774657 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0407 12:49:10.142232  774657 node_conditions.go:123] node cpu capacity is 8
	I0407 12:49:10.142257  774657 node_conditions.go:105] duration metric: took 184.601963ms to run NodePressure ...
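
	The NodePressure figures above come straight from the node's reported capacity; the same numbers can be read directly (node and context names from this run):

	    kubectl --context addons-662808 get node addons-662808 \
	      -o jsonpath='{.status.capacity}'
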
	I0407 12:49:10.142274  774657 start.go:241] waiting for startup goroutines ...
	I0407 12:49:10.251805  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:10.337317  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:10.836426  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:10.838319  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:11.252328  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:11.337331  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:11.752587  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:11.837560  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:12.252521  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:12.337769  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:12.751847  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:12.836848  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:13.252001  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:13.337448  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:13.762614  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:13.863313  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:14.262294  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:14.337618  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:14.752649  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:14.837854  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:15.321423  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:15.422912  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:15.752197  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:15.837658  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:16.252874  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:16.336998  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:16.751880  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:16.837277  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:17.252436  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:17.337473  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:17.751589  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:17.837696  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:18.252018  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:18.337152  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:18.752450  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:18.837959  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:19.252528  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:19.337699  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:19.752271  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:19.837282  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:20.252511  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:20.338117  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:20.751876  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:20.837221  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:21.252417  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:21.337571  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:21.751914  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:21.848088  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:22.252905  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:22.337025  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:22.761893  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:22.862451  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:23.253121  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:23.337397  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:23.752481  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:23.837743  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:24.262361  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:24.337610  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:24.762452  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:24.863557  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:25.252282  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:25.337916  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:25.761505  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:25.861828  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:26.253805  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:26.354191  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:26.752652  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:26.837610  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:27.252601  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:27.338102  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:27.762166  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:27.837210  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:28.252238  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:28.337628  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0407 12:49:28.762917  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:28.836794  774657 kapi.go:107] duration metric: took 46.003363526s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0407 12:49:29.252517  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:29.752604  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:30.252015  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:30.751676  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:31.252261  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:31.762567  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:32.252369  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:32.752333  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:33.252523  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:33.752283  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:34.251909  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:34.752333  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:35.252570  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:35.762247  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:36.252317  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:36.751967  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:37.252405  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:37.752636  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:38.252234  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:38.752194  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:39.252652  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:39.752192  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:40.251849  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:40.752268  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:41.252433  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:41.821432  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:42.251942  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:42.751723  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:43.261945  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:43.751837  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:44.252652  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:44.752745  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:45.252022  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:45.752724  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:46.252901  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:46.752124  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:47.252473  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:47.789485  774657 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0407 12:49:48.251650  774657 kapi.go:107] duration metric: took 1m6.502850494s to wait for app.kubernetes.io/name=ingress-nginx ...
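
	The 1m6.5s poll loop above is equivalent to a single blocking wait. Note that in this run three pods matched app.kubernetes.io/name=ingress-nginx, including the short-lived admission jobs that complete rather than become Ready, so waiting on the controller's component label is the more reliable form (timeout is illustrative):

	    kubectl --context addons-662808 -n ingress-nginx wait pod \
	      --selector=app.kubernetes.io/component=controller \
	      --for=condition=Ready --timeout=300s
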
	I0407 12:50:06.661919  774657 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0407 12:50:06.661948  774657 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical "waiting for pod kubernetes.io/minikube-addons=gcp-auth" poll lines repeated every ~500ms through 12:51:12 ...]
	I0407 12:51:13.176866  774657 kapi.go:107] duration metric: took 2m28.519220815s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0407 12:51:13.178364  774657 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-662808 cluster.
	I0407 12:51:13.179475  774657 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0407 12:51:13.180587  774657 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0407 12:51:13.181736  774657 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, amd-gpu-device-plugin, volcano, cloud-spanner, nvidia-device-plugin, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0407 12:51:13.182763  774657 addons.go:514] duration metric: took 2m44.556414011s for enable addons: enabled=[ingress-dns storage-provisioner amd-gpu-device-plugin volcano cloud-spanner nvidia-device-plugin inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0407 12:51:13.182817  774657 start.go:246] waiting for cluster config update ...
	I0407 12:51:13.182841  774657 start.go:255] writing updated cluster config ...
	I0407 12:51:13.183549  774657 ssh_runner.go:195] Run: rm -f paused
	I0407 12:51:13.236912  774657 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 12:51:13.238540  774657 out.go:177] * Done! kubectl is now configured to use "addons-662808" cluster and "default" namespace by default
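
For context: the gcp-auth opt-out described above works by labeling the pod at creation time, since the mutating webhook acts at admission. A minimal sketch (the pod name and image here are illustrative only):

    kubectl --context addons-662808 run my-pod \
      --image=busybox:stable \
      --labels=gcp-auth-skip-secret=true \
      -- sleep 3600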
	
	
	==> Docker <==
	Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.332822653Z" level=info msg="ignoring event" container=86370591ab32ac046883a9c1ed4c71092f44f387ff0ba3c031030ebca25cd94f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.345587296Z" level=info msg="ignoring event" container=cd80a0757ba85ba7b309a52568ae874f5a28ed00add028bcefe449245af54ef3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.345640461Z" level=info msg="ignoring event" container=662cfd47ac3fb6ed3edd5ce9800afc9c08ea202df6bd19abe6a80d88eb0d079d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.350910260Z" level=info msg="ignoring event" container=76f9d40c6f8e8f7786d9b2f1218c79aeb61455d0f4d2bbe462367b4f3eef5b31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.425858525Z" level=info msg="ignoring event" container=998ede9839c047880f9d8cafba5aca9ff72ba878c02e5748dc30485d2dc35de9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.629788278Z" level=info msg="ignoring event" container=cc9ed7e51f74421296d40390cc7a6667df88f250c56d5d4c10d03a76a65cf670 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.653363700Z" level=info msg="ignoring event" container=e20a677a788537b1fb642a36089e395042045741eb6d130d1809dc26744cb8e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:52:46 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:52:46Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"csi-hostpath-resizer-0_kube-system\": unexpected command output nsenter: cannot open /proc/6029/ns/net: No such file or directory\n with error: exit status 1"
	Apr 07 12:52:46 addons-662808 dockerd[1440]: time="2025-04-07T12:52:46.731600659Z" level=info msg="ignoring event" container=00d88fae1d6a7f2b10882def30670ac5391f57c77edd28865bd4ed74ce4ecff9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 12:52:50 addons-662808 dockerd[1440]: time="2025-04-07T12:52:50.351915768Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:52:50 addons-662808 dockerd[1440]: time="2025-04-07T12:52:50.354061040Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:52:50 addons-662808 dockerd[1440]: time="2025-04-07T12:52:50.482128979Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:52:50 addons-662808 dockerd[1440]: time="2025-04-07T12:52:50.484284706Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:53:17 addons-662808 dockerd[1440]: time="2025-04-07T12:53:17.358469673Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:53:17 addons-662808 dockerd[1440]: time="2025-04-07T12:53:17.360076627Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:53:20 addons-662808 dockerd[1440]: time="2025-04-07T12:53:20.408326435Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:53:20 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:53:20Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: 1.0: Pulling from kicbase/echo-server"
	Apr 07 12:53:25 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:53:25Z" level=error msg="error getting RW layer size for container ID 'c282c016006910b62bda0d3a5c34b1ad120a2cd6dfd5198ad8ea1f1a3ac5f8a8': Error response from daemon: No such container: c282c016006910b62bda0d3a5c34b1ad120a2cd6dfd5198ad8ea1f1a3ac5f8a8"
	Apr 07 12:53:25 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:53:25Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c282c016006910b62bda0d3a5c34b1ad120a2cd6dfd5198ad8ea1f1a3ac5f8a8'"
	Apr 07 12:53:59 addons-662808 dockerd[1440]: time="2025-04-07T12:53:59.349578742Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:53:59 addons-662808 dockerd[1440]: time="2025-04-07T12:53:59.351294002Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:54:12 addons-662808 dockerd[1440]: time="2025-04-07T12:54:12.363934612Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:54:12 addons-662808 dockerd[1440]: time="2025-04-07T12:54:12.365801935Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:55:28 addons-662808 dockerd[1440]: time="2025-04-07T12:55:28.452081720Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:55:28 addons-662808 cri-dockerd[1743]: time="2025-04-07T12:55:28Z" level=info msg="Stop pulling image busybox:stable: stable: Pulling from library/busybox"
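
For context: the toomanyrequests errors above are Docker Hub's anonymous pull rate limit, which is what leaves the busybox:stable pull (and with it the test-local-path pod) stuck in ImagePullBackOff. Docker Hub reports the current allowance via rate-limit headers; a quick check from the host (assumes curl and jq are installed):

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit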
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                    CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	91847463ff11d       nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591                            3 minutes ago       Running             nginx                     0                   53e4d3f317689       nginx
	432e12c3d133a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e      3 minutes ago       Running             busybox                   0                   aa8a092293397       busybox
	ce4342824436c       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246   6 minutes ago       Running             local-path-provisioner    0                   809daafacf09a       local-path-provisioner-76f89f99b5-zz5vr
	8453eafd0d8b9       6e38f40d628db                                                                                            7 minutes ago       Running             storage-provisioner       0                   1514be81e6c23       storage-provisioner
	896abc0e03ea1       c69fa2e9cbf5f                                                                                            7 minutes ago       Running             coredns                   0                   cb2349542b905       coredns-668d6bf9bc-2kx5j
	6dec7c27faa4d       f1332858868e1                                                                                            7 minutes ago       Running             kube-proxy                0                   74eaa8b89e907       kube-proxy-cgdfz
	920768df661bd       a9e7e6b294baf                                                                                            7 minutes ago       Running             etcd                      0                   a37ca73db4ae0       etcd-addons-662808
	853e5bbacf4c9       85b7a174738ba                                                                                            7 minutes ago       Running             kube-apiserver            0                   1923f5b1ef964       kube-apiserver-addons-662808
	18ae4fce94233       d8e673e7c9983                                                                                            7 minutes ago       Running             kube-scheduler            0                   3e55a61a516f6       kube-scheduler-addons-662808
	b0deb3bc6ed6e       b6a454c5a800d                                                                                            7 minutes ago       Running             kube-controller-manager   0                   cf289b2d28a3d       kube-controller-manager-addons-662808
	
	
	==> coredns [896abc0e03ea] <==
	[INFO] 10.244.0.23:40289 - 26455 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006207103s
	[INFO] 10.244.0.23:52764 - 22657 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00557717s
	[INFO] 10.244.0.23:34767 - 40349 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005987241s
	[INFO] 10.244.0.23:52230 - 5890 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004366156s
	[INFO] 10.244.0.23:40289 - 4528 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004304377s
	[INFO] 10.244.0.23:54802 - 10025 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00481132s
	[INFO] 10.244.0.23:40716 - 62638 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006114344s
	[INFO] 10.244.0.23:59305 - 18490 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004570626s
	[INFO] 10.244.0.23:43430 - 16331 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00731684s
	[INFO] 10.244.0.23:34767 - 22823 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005740799s
	[INFO] 10.244.0.23:59305 - 53411 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005722537s
	[INFO] 10.244.0.23:54802 - 6185 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004851224s
	[INFO] 10.244.0.23:52230 - 19158 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005498315s
	[INFO] 10.244.0.23:43430 - 44683 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005207469s
	[INFO] 10.244.0.23:40716 - 55430 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005081415s
	[INFO] 10.244.0.23:52764 - 18902 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00613158s
	[INFO] 10.244.0.23:43430 - 25725 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.23:59305 - 47013 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000119123s
	[INFO] 10.244.0.23:52230 - 54068 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072255s
	[INFO] 10.244.0.23:34767 - 22490 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053071s
	[INFO] 10.244.0.23:52764 - 19646 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049646s
	[INFO] 10.244.0.23:40289 - 37858 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.041211359s
	[INFO] 10.244.0.23:40716 - 39367 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000207832s
	[INFO] 10.244.0.23:54802 - 25291 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000163712s
	[INFO] 10.244.0.23:40289 - 22018 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000128136s
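
For context: the NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-list expansion (the cluster default of ndots:5 tries each search suffix before the name resolves). A fully qualified query with a trailing dot skips the expansion; for example, from the busybox pod in this cluster:

    kubectl --context addons-662808 exec busybox -- \
      nslookup hello-world-app.default.svc.cluster.local.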
	
	
	==> describe nodes <==
	Name:               addons-662808
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-662808
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=addons-662808
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_48_23_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-662808
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:48:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-662808
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 12:55:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 12:52:59 +0000   Mon, 07 Apr 2025 12:48:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 12:52:59 +0000   Mon, 07 Apr 2025 12:48:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 12:52:59 +0000   Mon, 07 Apr 2025 12:48:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 12:52:59 +0000   Mon, 07 Apr 2025 12:48:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-662808
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 507234e98f564487b1aa1c00ea17aac2
	  System UUID:                55ac8795-6631-4183-b1ad-654ae3cdc752
	  Boot ID:                    1751ef18-988c-47e7-9c05-4bbf13b6e72b
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.0.4
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  default                     hello-world-app-7d9564db4-rps6j            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     test-local-path                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 coredns-668d6bf9bc-2kx5j                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m10s
	  kube-system                 etcd-addons-662808                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m15s
	  kube-system                 kube-apiserver-addons-662808               250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-addons-662808      200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-cgdfz                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-scheduler-addons-662808               100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  local-path-storage          local-path-provisioner-76f89f99b5-zz5vr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m5s   kube-proxy       
	  Normal   Starting                 7m15s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m15s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  7m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m15s  kubelet          Node addons-662808 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m15s  kubelet          Node addons-662808 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m15s  kubelet          Node addons-662808 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m11s  node-controller  Node addons-662808 event: Registered Node addons-662808 in Controller
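
For reference, this node description can be reproduced against the same cluster with:

    kubectl --context addons-662808 describe node addons-662808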
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 65 55 c9 33 59 08 06
	[  +0.141473] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 bf 1f c7 ae 9e 08 06
	[ +21.609237] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d5 9a 49 e1 56 08 06
	[  +0.000651] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 e3 ca 1b 7f d5 08 06
	[Apr 7 12:50] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 72 30 e1 9f c6 d3 08 06
	[  +0.097803] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 81 9a df 00 56 08 06
	[Apr 7 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 1f 16 4b 47 75 08 06
	[  +0.000518] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
	[Apr 7 12:52] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 4c 24 4e 2e 75 08 06
	[  +0.000501] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
	[  +0.000635] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 4f 27 69 51 39 08 06
	[ +12.201481] IPv4: martian source 10.244.0.33 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d5 9a 49 e1 56 08 06
	[  +0.317597] IPv4: martian source 10.244.0.23 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
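
For context: these "martian source" entries are the kernel logging packets whose source address looks implausible for the receiving interface, a benign and common sight with the 10.244.0.0/24 pod bridge. Whether they are logged at all is gated by a sysctl; to check it on the host:

    sysctl net.ipv4.conf.all.log_martians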
	
	
	==> etcd [920768df661b] <==
	{"level":"info","ts":"2025-04-07T12:48:19.041762Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-07T12:48:19.041791Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-07T12:48:19.629637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-04-07T12:48:19.629683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-04-07T12:48:19.629698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-04-07T12:48:19.629761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-04-07T12:48:19.629774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-04-07T12:48:19.629784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-04-07T12:48:19.629793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-04-07T12:48:19.630728Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-662808 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:48:19.630724Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:48:19.630737Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:48:19.630749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:48:19.631086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:48:19.631113Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:48:19.631265Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:48:19.631343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:48:19.631370Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:48:19.631752Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:48:19.631800Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:48:19.632483Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T12:48:19.632485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-04-07T12:48:32.220218Z","caller":"traceutil/trace.go:171","msg":"trace[2071743628] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"100.382692ms","start":"2025-04-07T12:48:32.119814Z","end":"2025-04-07T12:48:32.220197Z","steps":["trace[2071743628] 'process raft request'  (duration: 21.015821ms)","trace[2071743628] 'compare'  (duration: 78.799843ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T12:48:32.220453Z","caller":"traceutil/trace.go:171","msg":"trace[20732756] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"100.569047ms","start":"2025-04-07T12:48:32.119875Z","end":"2025-04-07T12:48:32.220444Z","steps":["trace[20732756] 'process raft request'  (duration: 99.859087ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T12:48:32.220540Z","caller":"traceutil/trace.go:171","msg":"trace[862675044] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"100.364164ms","start":"2025-04-07T12:48:32.120170Z","end":"2025-04-07T12:48:32.220534Z","steps":["trace[862675044] 'process raft request'  (duration: 99.618527ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:55:38 up 20:38,  0 users,  load average: 0.83, 0.68, 0.53
	Linux addons-662808 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [853e5bbacf4c] <==
	W0407 12:51:44.625398       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0407 12:51:45.066978       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0407 12:52:01.283207       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41846: use of closed network connection
	E0407 12:52:01.461883       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:41870: use of closed network connection
	I0407 12:52:10.943381       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.105.33"}
	I0407 12:52:22.654344       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0407 12:52:24.978052       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0407 12:52:25.174810       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.149.170"}
	I0407 12:52:27.263039       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0407 12:52:28.430617       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0407 12:52:34.687885       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.103.112"}
	I0407 12:52:45.114334       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:52:45.114390       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:52:45.127046       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:52:45.127100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:52:45.128585       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:52:45.128643       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:52:45.141853       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:52:45.141917       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0407 12:52:45.252291       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0407 12:52:45.252336       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0407 12:52:46.129357       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0407 12:52:46.252830       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0407 12:52:46.264675       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0407 12:53:08.037599       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [b0deb3bc6ed6] <==
	E0407 12:55:18.940353       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:55:20.744770       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:55:20.747841       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0407 12:55:20.748744       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:55:20.748792       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:55:24.607684       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:55:24.608767       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0407 12:55:24.609581       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:55:24.609611       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:55:26.221673       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:55:26.222698       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0407 12:55:26.223670       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:55:26.223711       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:55:30.039758       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:55:30.040722       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="bus.volcano.sh/v1alpha1, Resource=commands"
	W0407 12:55:30.041612       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:55:30.041654       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:55:35.086184       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:55:35.087109       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=queues"
	W0407 12:55:35.088030       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:55:35.088066       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0407 12:55:35.331332       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0407 12:55:35.332328       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=podgroups"
	W0407 12:55:35.333301       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0407 12:55:35.333335       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
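
For context: the repeating watch failures above name resources (volcano.sh, snapshot.storage.k8s.io, gadget.kinvolk.io) whose CRDs were removed when the corresponding addons were torn down mid-run; the controller-manager's metadata informers keep retrying until their caches catch up. A quick way to confirm the CRDs are gone:

    kubectl --context addons-662808 get crd | grep -E 'volcano|snapshot.storage|gadget' \
      || echo "no matching CRDs remain"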
	
	
	==> kube-proxy [6dec7c27faa4] <==
	I0407 12:48:31.825911       1 server_linux.go:66] "Using iptables proxy"
	I0407 12:48:32.432388       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0407 12:48:32.432498       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:48:32.721647       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0407 12:48:32.721717       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:48:32.729212       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:48:32.737479       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:48:32.737521       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:48:32.744191       1 config.go:199] "Starting service config controller"
	I0407 12:48:32.824043       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:48:32.821324       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:48:32.824089       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:48:32.821803       1 config.go:329] "Starting node config controller"
	I0407 12:48:32.824100       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:48:32.924577       1 shared_informer.go:320] Caches are synced for node config
	I0407 12:48:32.924618       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:48:32.924631       1 shared_informer.go:320] Caches are synced for endpoint slice config
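
For context: the lines above show kube-proxy running the iptables proxier, the Linux default when no mode is configured. With a kubeadm-provisioned cluster like this one, the configured mode lives in the kube-proxy ConfigMap:

    kubectl --context addons-662808 -n kube-system get configmap kube-proxy \
      -o yaml | grep 'mode:'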
	
	
	==> kube-scheduler [18ae4fce9423] <==
	E0407 12:48:20.942096       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0407 12:48:20.942098       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:20.942127       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:48:20.942143       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:20.942432       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0407 12:48:20.942520       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 12:48:20.942568       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0407 12:48:20.942517       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:20.943345       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 12:48:20.943377       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:20.943610       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:48:20.943639       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:21.807656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 12:48:21.807698       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:21.847602       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 12:48:21.847664       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 12:48:21.923314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0407 12:48:21.923357       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:21.966324       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0407 12:48:21.966388       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:22.003874       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 12:48:22.003912       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:48:22.019121       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 12:48:22.019164       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0407 12:48:23.938749       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 12:53:59 addons-662808 kubelet[2632]: E0407 12:53:59.351841    2632 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Apr 07 12:53:59 addons-662808 kubelet[2632]: E0407 12:53:59.351919    2632 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Apr 07 12:53:59 addons-662808 kubelet[2632]: E0407 12:53:59.352097    2632 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ffsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(d80409e4-1900-4a8f-9c48-4e8e81479f9a): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 12:53:59 addons-662808 kubelet[2632]: E0407 12:53:59.353239    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
	Apr 07 12:54:10 addons-662808 kubelet[2632]: E0407 12:54:10.232947    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
	Apr 07 12:54:12 addons-662808 kubelet[2632]: E0407 12:54:12.366324    2632 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Apr 07 12:54:12 addons-662808 kubelet[2632]: E0407 12:54:12.366385    2632 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Apr 07 12:54:12 addons-662808 kubelet[2632]: E0407 12:54:12.366488    2632 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:hello-world-app,Image:docker.io/kicbase/echo-server:1.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qzp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod hello-world-app-7d9564db4-rps6j_default(5e3c9230-c6e8-4e0b-babf-9ce5ce906846): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 12:54:12 addons-662808 kubelet[2632]: E0407 12:54:12.367673    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
	Apr 07 12:54:21 addons-662808 kubelet[2632]: E0407 12:54:21.233719    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
	Apr 07 12:54:25 addons-662808 kubelet[2632]: E0407 12:54:25.233151    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
	Apr 07 12:54:28 addons-662808 kubelet[2632]: I0407 12:54:28.230947    2632 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Apr 07 12:54:33 addons-662808 kubelet[2632]: E0407 12:54:33.232761    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
	Apr 07 12:54:38 addons-662808 kubelet[2632]: E0407 12:54:38.233270    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
	Apr 07 12:54:45 addons-662808 kubelet[2632]: E0407 12:54:45.232926    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
	Apr 07 12:54:51 addons-662808 kubelet[2632]: E0407 12:54:51.232820    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
	Apr 07 12:54:59 addons-662808 kubelet[2632]: E0407 12:54:59.233468    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
	Apr 07 12:55:05 addons-662808 kubelet[2632]: E0407 12:55:05.232994    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
	Apr 07 12:55:13 addons-662808 kubelet[2632]: E0407 12:55:13.233506    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
	Apr 07 12:55:16 addons-662808 kubelet[2632]: E0407 12:55:16.233292    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
	Apr 07 12:55:27 addons-662808 kubelet[2632]: E0407 12:55:27.233521    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-7d9564db4-rps6j" podUID="5e3c9230-c6e8-4e0b-babf-9ce5ce906846"
	Apr 07 12:55:28 addons-662808 kubelet[2632]: E0407 12:55:28.454603    2632 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Apr 07 12:55:28 addons-662808 kubelet[2632]: E0407 12:55:28.454657    2632 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Apr 07 12:55:28 addons-662808 kubelet[2632]: E0407 12:55:28.454778    2632 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ffsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(d80409e4-1900-4a8f-9c48-4e8e81479f9a): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 12:55:28 addons-662808 kubelet[2632]: E0407 12:55:28.455958    2632 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="d80409e4-1900-4a8f-9c48-4e8e81479f9a"
	
	
	==> storage-provisioner [8453eafd0d8b] <==
	I0407 12:48:37.129932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:48:37.226583       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:48:37.226661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:48:37.322710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:48:37.323158       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-662808_6facb6dc-a95d-4e49-9932-41d1bf4bf1b9!
	I0407 12:48:37.324247       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21766113-f71c-47e8-a214-f06f7579e823", APIVersion:"v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-662808_6facb6dc-a95d-4e49-9932-41d1bf4bf1b9 became leader
	I0407 12:48:37.424128       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-662808_6facb6dc-a95d-4e49-9932-41d1bf4bf1b9!
	

                                                
                                                
-- /stdout --
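Note on the kube-proxy block in the logs above: the "Kube-proxy configuration may be incomplete or incorrect" warning is advisory and unrelated to this failure, and the message itself names the remedy. A minimal sketch of that remedy, assuming a kubeadm-managed cluster where kube-proxy reads its configuration from the kube-proxy ConfigMap in kube-system (this test run does not apply it):

	kubectl -n kube-system edit configmap kube-proxy            # set nodePortAddresses: ["primary"], as the warning suggests
	kubectl -n kube-system rollout restart daemonset kube-proxy # restart kube-proxy so it picks up the new config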
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-662808 -n addons-662808
helpers_test.go:261: (dbg) Run:  kubectl --context addons-662808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-rps6j test-local-path
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-662808 describe pod hello-world-app-7d9564db4-rps6j test-local-path
helpers_test.go:282: (dbg) kubectl --context addons-662808 describe pod hello-world-app-7d9564db4-rps6j test-local-path:

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-rps6j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-662808/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:52:34 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.35
	IPs:
	  IP:           10.244.0.35
	Controlled By:  ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9qzp7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9qzp7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m5s                  default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-rps6j to addons-662808
	  Warning  Failed     2m19s (x2 over 3m4s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    87s (x4 over 3m4s)    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     87s (x4 over 3m4s)    kubelet            Error: ErrImagePull
	  Warning  Failed     87s (x2 over 2m49s)   kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    12s (x12 over 3m3s)   kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     12s (x12 over 3m3s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-662808/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:52:36 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.36
	IPs:
	  IP:  10.244.0.36
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5ffsn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-5ffsn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/test-local-path to addons-662808
	  Warning  Failed     100s (x4 over 3m2s)  kubelet            Failed to pull image "busybox:stable": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    26s (x11 over 3m1s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     26s (x11 over 3m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    11s (x5 over 3m2s)   kubelet            Pulling image "busybox:stable"
	  Warning  Failed     11s (x5 over 3m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     11s                  kubelet            Failed to pull image "busybox:stable": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/LocalPath FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.92457104s)
--- FAIL: TestAddons/parallel/LocalPath (229.37s)
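Every pull failure in this test traces to the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), not the local-path provisioner under test. Because the container's ImagePullPolicy is IfNotPresent, pre-seeding the image into the cluster would let the kubelet skip Docker Hub entirely. A hedged workaround sketch; the profile name matches this run, but the login step assumes Docker Hub credentials are available:

	docker login                                         # authenticate the host daemon to lift the anonymous pull limit
	docker pull busybox:stable                           # pull once on the host
	minikube -p addons-662808 image load busybox:stable  # copy the image into the cluster's container runtime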

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (187.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2d010af8-2222-42f7-9ee3-0c999bc260bd] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004018064s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-880043 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-880043 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-880043 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-880043 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d70390cc-cf47-4135-b3ff-0cdfac11e46d] Pending
helpers_test.go:344: "sp-pod" [d70390cc-cf47-4135-b3ff-0cdfac11e46d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/04/07 12:59:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-880043 -n functional-880043
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-04-07 13:02:49.443124399 +0000 UTC m=+919.188647700
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-880043 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-880043 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-880043/192.168.49.2
Start Time:       Mon, 07 Apr 2025 12:59:49 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.15
IPs:
IP:  10.244.0.15
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wpd89 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-wpd89:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-880043
Normal   Pulling    86s (x4 over 2m59s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     86s (x4 over 2m57s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     86s (x4 over 2m57s)  kubelet            Error: ErrImagePull
Normal   BackOff    8s (x11 over 2m57s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     8s (x11 over 2m57s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-880043 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-880043 logs sp-pod -n default: exit status 1 (70.462564ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-880043 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
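For reference, a hedged reconstruction of the two manifests this test applies, inferred from the describe output above (claim "myclaim", mount /tmp/mount, image docker.io/nginx, label test=storage-provisioner); the actual testdata/storage-provisioner/pvc.yaml and pod.yaml are not shown in this report, so the access mode and storage request below are assumptions:

	kubectl --context functional-880043 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]   # assumption: not visible in this report
	  resources:
	    requests:
	      storage: 500Mi               # assumption: not visible in this report
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	  - name: myfrontend
	    image: docker.io/nginx
	    volumeMounts:
	    - mountPath: /tmp/mount
	      name: mypd
	  volumes:
	  - name: mypd
	    persistentVolumeClaim:
	      claimName: myclaim
	EOF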
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-880043
helpers_test.go:235: (dbg) docker inspect functional-880043:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770",
	        "Created": "2025-04-07T12:57:10.465216803Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 806964,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-07T12:57:10.501713297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:037bd1b5a0f63899880a74b20d0e40b693fd199ade4ed9b883be5ed5726d15a6",
	        "ResolvConfPath": "/var/lib/docker/containers/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770/hostname",
	        "HostsPath": "/var/lib/docker/containers/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770/hosts",
	        "LogPath": "/var/lib/docker/containers/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770-json.log",
	        "Name": "/functional-880043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-880043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-880043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770",
	                "LowerDir": "/var/lib/docker/overlay2/94a4a8db1feca14367a58e8fa7706ada4faa390501404848784efd28fa8d29e5-init/diff:/var/lib/docker/overlay2/4ad95e7f4a49b487176ca9dc3e3437ef3df8ea71a4a72c4a666a7db5084d5e6d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94a4a8db1feca14367a58e8fa7706ada4faa390501404848784efd28fa8d29e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94a4a8db1feca14367a58e8fa7706ada4faa390501404848784efd28fa8d29e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94a4a8db1feca14367a58e8fa7706ada4faa390501404848784efd28fa8d29e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-880043",
	                "Source": "/var/lib/docker/volumes/functional-880043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-880043",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-880043",
	                "name.minikube.sigs.k8s.io": "functional-880043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd7d8cca14b89b95b6245df81c0c6bf37a17fa38d021aaffbf8b6931298e62ee",
	            "SandboxKey": "/var/run/docker/netns/cd7d8cca14b8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-880043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:7f:f5:fd:15:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e0d9bd542f32a6694070e026ee540cdd6c07c3f067cdf330da0a207557cfafe",
	                    "EndpointID": "5a3c8cb27d8f9e8a8a3df75f994934e6ef470d8d414dcb4e25cdb940eb5b1c64",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-880043",
	                        "259ca6716a6d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
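When only one field of the inspect dump is needed, docker inspect accepts a Go template via -f; this sketch extracts the forwarded API-server port (8441/tcp), which the output above maps to host port 32781:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-880043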
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-880043 -n functional-880043
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-880043 logs -n 25: (1.006409635s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-880043 ssh -n                                              | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | functional-880043 sudo cat                                            |                   |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                              |                   |         |         |                     |                     |
	| tunnel         | functional-880043 tunnel                                              | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| cp             | functional-880043 cp                                                  | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | testdata/cp-test.txt                                                  |                   |         |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                                       |                   |         |         |                     |                     |
	| start          | -p functional-880043                                                  | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|                | --dry-run --memory                                                    |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                               |                   |         |         |                     |                     |
	|                | --driver=docker                                                       |                   |         |         |                     |                     |
	|                | --container-runtime=docker                                            |                   |         |         |                     |                     |
	| ssh            | functional-880043 ssh -n                                              | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | functional-880043 sudo cat                                            |                   |         |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                                       |                   |         |         |                     |                     |
	| start          | -p functional-880043                                                  | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|                | --dry-run --alsologtostderr                                           |                   |         |         |                     |                     |
	|                | -v=1 --driver=docker                                                  |                   |         |         |                     |                     |
	|                | --container-runtime=docker                                            |                   |         |         |                     |                     |
	| image          | functional-880043 image load --daemon                                 | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | kicbase/echo-server:functional-880043                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043 image ls                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| image          | functional-880043 image load --daemon                                 | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | kicbase/echo-server:functional-880043                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043 image ls                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| image          | functional-880043 image save kicbase/echo-server:functional-880043    | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043 image rm                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | kicbase/echo-server:functional-880043                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043 image ls                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| image          | functional-880043 image load                                          | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                    | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | -p functional-880043                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| update-context | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| update-context | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| update-context | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| image          | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | image ls --format short                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | image ls --format yaml                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| ssh            | functional-880043 ssh pgrep                                           | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|                | buildkitd                                                             |                   |         |         |                     |                     |
	| image          | functional-880043 image build -t                                      | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | localhost/my-image:functional-880043                                  |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                   |         |         |                     |                     |
	| image          | functional-880043 image ls                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| image          | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | image ls --format json                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | image ls --format table                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:59:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:59:43.613838  827609 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:59:43.613943  827609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:43.613955  827609 out.go:358] Setting ErrFile to fd 2...
	I0407 12:59:43.613962  827609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:43.614185  827609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 12:59:43.614788  827609 out.go:352] Setting JSON to false
	I0407 12:59:43.616036  827609 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":74533,"bootTime":1743956251,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:59:43.616153  827609 start.go:139] virtualization: kvm guest
	I0407 12:59:43.618202  827609 out.go:177] * [functional-880043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:59:43.619661  827609 notify.go:220] Checking for updates...
	I0407 12:59:43.619680  827609 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:59:43.621068  827609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:59:43.622621  827609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	I0407 12:59:43.623917  827609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	I0407 12:59:43.625502  827609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:59:43.626937  827609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:59:43.629135  827609 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:59:43.629705  827609 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:59:43.654648  827609 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:59:43.654785  827609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:59:43.707402  827609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-07 12:59:43.697843057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:59:43.707549  827609 docker.go:318] overlay module found
	I0407 12:59:43.709623  827609 out.go:177] * Using the docker driver based on existing profile
	I0407 12:59:43.710989  827609 start.go:297] selected driver: docker
	I0407 12:59:43.711015  827609 start.go:901] validating driver "docker" against &{Name:functional-880043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-880043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:59:43.711107  827609 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:59:43.711214  827609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:59:43.768164  827609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-07 12:59:43.759130522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:59:43.768816  827609 cni.go:84] Creating CNI manager for ""
	I0407 12:59:43.768895  827609 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:59:43.768955  827609 start.go:340] cluster config:
	{Name:functional-880043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-880043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:59:43.770678  827609 out.go:177] * dry-run validation complete!
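
	The "dry-run validation complete!" line above means this start pass only re-validated the existing functional-880043 profile against the docker driver; nothing was mutated. A minimal way to reproduce just that validation step (a sketch; it assumes the same test binary and profile name, and relies on minikube's --dry-run flag, which validates configuration without starting the cluster):

	# Re-run the config/driver validation without touching the running cluster
	$ out/minikube-linux-amd64 start -p functional-880043 --dry-run --alsologtostderr -v=1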
	
	
	==> Docker <==
	Apr 07 12:59:52 functional-880043 dockerd[7912]: time="2025-04-07T12:59:52.174350079Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:59:52 functional-880043 dockerd[7912]: time="2025-04-07T12:59:52.182843587Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:59:52 functional-880043 dockerd[7912]: time="2025-04-07T12:59:52.403414273Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:59:52 functional-880043 dockerd[7912]: time="2025-04-07T12:59:52.427633400Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:59:57 functional-880043 dockerd[7912]: 2025/04/07 12:59:57 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Apr 07 12:59:59 functional-880043 dockerd[7912]: time="2025-04-07T12:59:59.775538617Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 12:59:59 functional-880043 dockerd[7912]: time="2025-04-07T12:59:59.777291028Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:04 functional-880043 dockerd[7912]: time="2025-04-07T13:00:04.023027233Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:04 functional-880043 dockerd[7912]: time="2025-04-07T13:00:04.025102474Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:19 functional-880043 dockerd[7912]: time="2025-04-07T13:00:19.761157990Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:19 functional-880043 dockerd[7912]: time="2025-04-07T13:00:19.763185800Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:23 functional-880043 dockerd[7912]: time="2025-04-07T13:00:23.740320331Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:23 functional-880043 dockerd[7912]: time="2025-04-07T13:00:23.742335592Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:28 functional-880043 dockerd[7912]: time="2025-04-07T13:00:28.755672755Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:28 functional-880043 dockerd[7912]: time="2025-04-07T13:00:28.757876313Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:01 functional-880043 dockerd[7912]: time="2025-04-07T13:01:01.755467644Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:01 functional-880043 dockerd[7912]: time="2025-04-07T13:01:01.757450480Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:11 functional-880043 dockerd[7912]: time="2025-04-07T13:01:11.742752912Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:11 functional-880043 dockerd[7912]: time="2025-04-07T13:01:11.744706230Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:23 functional-880043 dockerd[7912]: time="2025-04-07T13:01:23.766488156Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:23 functional-880043 dockerd[7912]: time="2025-04-07T13:01:23.767984641Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:02:23 functional-880043 dockerd[7912]: time="2025-04-07T13:02:23.827616352Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:02:23 functional-880043 cri-dockerd[8224]: time="2025-04-07T13:02:23Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Apr 07 13:02:42 functional-880043 dockerd[7912]: time="2025-04-07T13:02:42.747836445Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:02:42 functional-880043 dockerd[7912]: time="2025-04-07T13:02:42.749636221Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
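
	Every image-dependent failure in this run traces back to these dockerd lines: the daemon inside the functional-880043 node keeps hitting Docker Hub's unauthenticated pull rate limit, so mysql:5.7 and the other test images never arrive. A hedged mitigation sketch (it assumes the host has authenticated Docker Hub credentials; minikube image load side-loads the image so the in-node daemon never pulls anonymously):

	# Confirm the in-node daemon is rate-limited
	$ out/minikube-linux-amd64 -p functional-880043 ssh "docker pull mysql:5.7"
	# Pull on the (authenticated) host, then side-load into the node
	$ docker pull mysql:5.7
	$ out/minikube-linux-amd64 -p functional-880043 image load mysql:5.7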
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	76358b95e354a       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   ecfc3fb716b1a       kubernetes-dashboard-7779f9b69b-7bq6n
	1e3b784a78750       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   3 minutes ago       Running             dashboard-metrics-scraper   0                   2903a302e8721       dashboard-metrics-scraper-5d59dccf9b-wcrbs
	e189b5117e5a7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    3 minutes ago       Exited              mount-munger                0                   07169736165fb       busybox-mount
	0d5398f90ba3a       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969     3 minutes ago       Running             echoserver                  0                   e9df4ab06e11e       hello-node-connect-58f9cf68d8-pb5x5
	92262c5378c59       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969     3 minutes ago       Running             echoserver                  0                   936f77817bfa9       hello-node-fcfd88b6f-64qb4
	484a97e967142       f1332858868e1                                                                                          3 minutes ago       Running             kube-proxy                  3                   8bbfe85e940e9       kube-proxy-gbn64
	70ee4aa40401c       c69fa2e9cbf5f                                                                                          3 minutes ago       Running             coredns                     2                   46a37726ae107       coredns-668d6bf9bc-lm7vs
	447c218e669ee       6e38f40d628db                                                                                          3 minutes ago       Running             storage-provisioner         3                   708b4156118f8       storage-provisioner
	40e6660aefde0       a9e7e6b294baf                                                                                          3 minutes ago       Running             etcd                        3                   4a2aea7daf681       etcd-functional-880043
	9c96acba8fe07       85b7a174738ba                                                                                          3 minutes ago       Running             kube-apiserver              0                   b2a092e5055a9       kube-apiserver-functional-880043
	369e0e364d2dc       b6a454c5a800d                                                                                          3 minutes ago       Running             kube-controller-manager     3                   ec196ff086726       kube-controller-manager-functional-880043
	8208c3578a46f       d8e673e7c9983                                                                                          3 minutes ago       Running             kube-scheduler              3                   456f8e4b36374       kube-scheduler-functional-880043
	fc3847647ca6e       b6a454c5a800d                                                                                          3 minutes ago       Exited              kube-controller-manager     2                   1f8a0bf2233a9       kube-controller-manager-functional-880043
	5dffcf5ca8736       f1332858868e1                                                                                          3 minutes ago       Exited              kube-proxy                  2                   a21720d17afe3       kube-proxy-gbn64
	b6e602aa1a76f       a9e7e6b294baf                                                                                          3 minutes ago       Exited              etcd                        2                   a2e8cd0f7de1f       etcd-functional-880043
	64763be05b368       d8e673e7c9983                                                                                          3 minutes ago       Exited              kube-scheduler              2                   b09e58e40b38c       kube-scheduler-functional-880043
	81b8568376d24       6e38f40d628db                                                                                          4 minutes ago       Exited              storage-provisioner         2                   843dc116e0a74       storage-provisioner
	0a142e2456576       c69fa2e9cbf5f                                                                                          4 minutes ago       Exited              coredns                     1                   e58cd935951de       coredns-668d6bf9bc-lm7vs
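
	Note that kube-apiserver is on attempt 0 while etcd, kube-proxy, the scheduler, and the controller-manager are on their third attempt: the apiserver container was recreated fresh after the last restart rather than restarted in place. The same view can be pulled from the node's runtime directly (a sketch; it assumes crictl is present in the node image, as is standard for kicbase):

	# All containers, including exited ones, as the CRI sees them
	$ out/minikube-linux-amd64 -p functional-880043 ssh "sudo crictl ps -a"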
	
	
	==> coredns [0a142e245657] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46477 - 25864 "HINFO IN 6663384892568457339.3459290480997454336. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014830626s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
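
	The forbidden list/watch errors above are transient: this coredns replica came up while the restarted kube-apiserver was still serving RBAC, then received SIGTERM once the control plane settled and its replacement (logged below) started cleanly. If such errors persisted, the grant to inspect is the system:coredns ClusterRole (a sketch; it assumes the kubectl context carries the profile name, which minikube sets up by default):

	# The rules here should cover list/watch on namespaces, services, and endpointslices
	$ kubectl --context functional-880043 describe clusterrole system:coredns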
	
	
	==> coredns [70ee4aa40401] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53607 - 19291 "HINFO IN 7338312159466506823.6241587426982328848. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023276333s
	
	
	==> describe nodes <==
	Name:               functional-880043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-880043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=functional-880043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_57_28_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-880043
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:02:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:02:49 +0000   Mon, 07 Apr 2025 12:57:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:02:49 +0000   Mon, 07 Apr 2025 12:57:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:02:49 +0000   Mon, 07 Apr 2025 12:57:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:02:49 +0000   Mon, 07 Apr 2025 12:57:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-880043
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 90d1bdfce53d418c8c347941c6c32b20
	  System UUID:                928416a6-fdbd-43a7-b082-ac5893cb488f
	  Boot ID:                    1751ef18-988c-47e7-9c05-4bbf13b6e72b
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.0.4
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-pb5x5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     hello-node-fcfd88b6f-64qb4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     mysql-58ccfd96bb-qdrzr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     3m17s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-668d6bf9bc-lm7vs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m17s
	  kube-system                 etcd-functional-880043                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m22s
	  kube-system                 kube-apiserver-functional-880043              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-functional-880043     200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-gbn64                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-functional-880043              100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-wcrbs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-7bq6n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m16s                  kube-proxy       
	  Normal   Starting                 3m44s                  kube-proxy       
	  Normal   Starting                 4m21s                  kube-proxy       
	  Normal   Starting                 5m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     5m22s                  kubelet          Node functional-880043 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    5m22s                  kubelet          Node functional-880043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  5m22s                  kubelet          Node functional-880043 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           5m19s                  node-controller  Node functional-880043 event: Registered Node functional-880043 in Controller
	  Normal   NodeNotReady             4m32s                  kubelet          Node functional-880043 status is now: NodeNotReady
	  Normal   RegisteredNode           4m18s                  node-controller  Node functional-880043 event: Registered Node functional-880043 in Controller
	  Normal   Starting                 3m49s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m49s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node functional-880043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node functional-880043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node functional-880043 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m42s                  node-controller  Node functional-880043 event: Registered Node functional-880043 in Controller
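
	The node itself is Ready; only the default-namespace workloads (mysql, nginx-svc, sp-pod) sit Pending behind the rate-limited image pulls shown in the Docker section above. A quick confirmation sketch against the pod named in the table:

	$ kubectl --context functional-880043 get pods -A
	$ kubectl --context functional-880043 -n default describe pod mysql-58ccfd96bb-qdrzr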
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 81 9a df 00 56 08 06
	[Apr 7 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 1f 16 4b 47 75 08 06
	[  +0.000518] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
	[Apr 7 12:52] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 4c 24 4e 2e 75 08 06
	[  +0.000501] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
	[  +0.000635] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 4f 27 69 51 39 08 06
	[ +12.201481] IPv4: martian source 10.244.0.33 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d5 9a 49 e1 56 08 06
	[  +0.317597] IPv4: martian source 10.244.0.23 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
	[Apr 7 12:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 0f f5 24 36 5e 08 06
	[  +0.002154] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 1b 3a 53 a4 84 08 06
	[Apr 7 12:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 4f 52 db 43 3f 08 06
	[ +39.620843] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 66 8e b4 08 fe 08 06
	[Apr 7 12:59] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 5c 7f 37 4d 33 08 06
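
	The "martian source" lines come from the shared host kernel and flag packets whose source address is unexpected on eth0; with Docker bridge networking plus pod churn on 10.244.0.0/24 this is routine noise rather than a failure signal. The relevant logging knobs can be inspected in-node (read-only sketch):

	$ out/minikube-linux-amd64 -p functional-880043 ssh "sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians"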
	
	
	==> etcd [40e6660aefde] <==
	{"level":"info","ts":"2025-04-07T12:59:02.548033Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-04-07T12:59:02.548163Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:59:02.548179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:59:02.548205Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:59:02.550026Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T12:59:02.550304Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T12:59:02.550352Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T12:59:02.550410Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-07T12:59:02.550429Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-07T12:59:04.340642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-04-07T12:59:04.340689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-07T12:59:04.340727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-04-07T12:59:04.340743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-04-07T12:59:04.340763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-07T12:59:04.340774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-04-07T12:59:04.340793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-07T12:59:04.342555Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-880043 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:59:04.342567Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:59:04.342569Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:59:04.342864Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:59:04.342910Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:59:04.343755Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:59:04.343758Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:59:04.344485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-04-07T12:59:04.344493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [b6e602aa1a76] <==
	{"level":"info","ts":"2025-04-07T12:58:59.349790Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-04-07T12:58:59.526751Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","commit-index":571}
	{"level":"info","ts":"2025-04-07T12:58:59.528134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
	{"level":"info","ts":"2025-04-07T12:58:59.528229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 3"}
	{"level":"info","ts":"2025-04-07T12:58:59.528246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 3, commit: 571, applied: 0, lastindex: 571, lastterm: 3]"}
	{"level":"warn","ts":"2025-04-07T12:58:59.529639Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-04-07T12:58:59.534058Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":544}
	{"level":"info","ts":"2025-04-07T12:58:59.539501Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-04-07T12:58:59.545732Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aec36adc501070cc","timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:58:59.546169Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-04-07T12:58:59.546230Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-07T12:58:59.546794Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:58:59.550064Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-07T12:58:59.550237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:58:59.550288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:58:59.550308Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:58:59.550575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-04-07T12:58:59.550640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-04-07T12:58:59.550731Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:58:59.550769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:58:59.622052Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T12:58:59.622404Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T12:58:59.622448Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T12:58:59.622561Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-07T12:58:59.622587Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	
	
	==> kernel <==
	 13:02:50 up 20:45,  0 users,  load average: 0.28, 1.21, 0.94
	Linux functional-880043 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9c96acba8fe0] <==
	I0407 12:59:05.339879       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 12:59:05.345394       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 12:59:05.419932       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 12:59:05.420053       1 policy_source.go:240] refreshing policies
	I0407 12:59:05.421678       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0407 12:59:05.421700       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0407 12:59:05.427703       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 12:59:05.625750       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 12:59:06.231195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0407 12:59:06.637893       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0407 12:59:06.639290       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 12:59:06.783975       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 12:59:06.815737       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 12:59:06.837990       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 12:59:06.843677       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 12:59:08.900042       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 12:59:26.398576       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.243.197"}
	I0407 12:59:30.555783       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0407 12:59:30.659747       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.208.31"}
	I0407 12:59:30.901555       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.173.156"}
	I0407 12:59:33.526425       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.33.30"}
	I0407 12:59:43.041296       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.49.160"}
	I0407 12:59:46.772948       1 controller.go:615] quota admission added evaluator for: namespaces
	I0407 12:59:47.033015       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.21.53"}
	I0407 12:59:47.051012       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.242.41"}
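
	Each "allocated clusterIPs" line corresponds to one Service created by the tests (invalid-svc, hello-node-connect, hello-node, mysql, nginx-svc, then the dashboard pair), so Service creation succeeded even while the backing pods could not pull images. To cross-check the allocations (sketch, same context assumption as above):

	$ kubectl --context functional-880043 get svc -A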
	
	
	==> kube-controller-manager [369e0e364d2d] <==
	E0407 12:59:46.838406       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:59:46.843612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="2.878135ms"
	E0407 12:59:46.843646       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0407 12:59:46.855740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="11.053333ms"
	I0407 12:59:46.861096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="5.307526ms"
	I0407 12:59:46.861200       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="59.454µs"
	I0407 12:59:46.925230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="40.643µs"
	I0407 12:59:46.944968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="25.264855ms"
	I0407 12:59:47.030325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="85.290883ms"
	I0407 12:59:47.030415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="45.769µs"
	I0407 12:59:48.572451       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.165735ms"
	I0407 12:59:48.572552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="55.148µs"
	I0407 12:59:48.632185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="69.374µs"
	I0407 12:59:52.636634       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="7.108582ms"
	I0407 12:59:52.636738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="53.114µs"
	I0407 13:00:04.630760       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="69.076µs"
	I0407 13:00:06.320511       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-880043"
	I0407 13:00:19.633011       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="87.742µs"
	I0407 13:00:32.632100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="75.362µs"
	I0407 13:00:47.634955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="76.234µs"
	I0407 13:01:14.631972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="89.09µs"
	I0407 13:01:28.631904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="106.967µs"
	I0407 13:02:35.633855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="85.129µs"
	I0407 13:02:49.929389       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-880043"
	I0407 13:02:50.630915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="83.087µs"
	
	
	==> kube-controller-manager [fc3847647ca6] <==
	
	
	==> kube-proxy [484a97e96714] <==
	I0407 12:59:06.252662       1 server_linux.go:66] "Using iptables proxy"
	I0407 12:59:06.425901       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0407 12:59:06.425989       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:59:06.449896       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0407 12:59:06.449964       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:59:06.452383       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:59:06.452851       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:59:06.452890       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:59:06.455955       1 config.go:199] "Starting service config controller"
	I0407 12:59:06.456006       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:59:06.456035       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:59:06.456041       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:59:06.456064       1 config.go:329] "Starting node config controller"
	I0407 12:59:06.456069       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:59:06.556388       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:59:06.556396       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:59:06.556408       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5dffcf5ca873] <==
	I0407 12:58:59.630075       1 server_linux.go:66] "Using iptables proxy"
	
	
	==> kube-scheduler [64763be05b36] <==
	
	
	==> kube-scheduler [8208c3578a46] <==
	I0407 12:59:02.854853       1 serving.go:386] Generated self-signed cert in-memory
	I0407 12:59:05.432310       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 12:59:05.432345       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:59:05.437841       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0407 12:59:05.437850       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0407 12:59:05.437849       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 12:59:05.437903       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0407 12:59:05.437904       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:59:05.437904       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0407 12:59:05.438080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 12:59:05.438116       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 12:59:05.538177       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0407 12:59:05.538199       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:59:05.538919       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Apr 07 13:01:28 functional-880043 kubelet[9782]: E0407 13:01:28.624535    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:01:35 functional-880043 kubelet[9782]: E0407 13:01:35.624373    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:01:36 functional-880043 kubelet[9782]: E0407 13:01:36.622195    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:01:41 functional-880043 kubelet[9782]: E0407 13:01:41.630390    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:01:48 functional-880043 kubelet[9782]: E0407 13:01:48.624442    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:01:50 functional-880043 kubelet[9782]: E0407 13:01:50.622054    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:01:56 functional-880043 kubelet[9782]: E0407 13:01:56.623811    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:02:02 functional-880043 kubelet[9782]: E0407 13:02:02.621907    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:02:03 functional-880043 kubelet[9782]: E0407 13:02:03.624276    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:02:09 functional-880043 kubelet[9782]: E0407 13:02:09.623629    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:02:14 functional-880043 kubelet[9782]: E0407 13:02:14.622092    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:02:17 functional-880043 kubelet[9782]: E0407 13:02:17.624057    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:02:23 functional-880043 kubelet[9782]: E0407 13:02:23.830391    9782 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Apr 07 13:02:23 functional-880043 kubelet[9782]: E0407 13:02:23.830461    9782 kuberuntime_image.go:55] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Apr 07 13:02:23 functional-880043 kubelet[9782]: E0407 13:02:23.830608    9782 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9rhrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-qdrzr_default(4580e356-0fb6-4c7b-9e35-1a2c89a735f8): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 13:02:23 functional-880043 kubelet[9782]: E0407 13:02:23.831840    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:02:26 functional-880043 kubelet[9782]: E0407 13:02:26.622107    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:02:30 functional-880043 kubelet[9782]: E0407 13:02:30.624255    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:02:35 functional-880043 kubelet[9782]: E0407 13:02:35.624166    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:02:41 functional-880043 kubelet[9782]: E0407 13:02:41.621974    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:02:42 functional-880043 kubelet[9782]: E0407 13:02:42.750185    9782 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Apr 07 13:02:42 functional-880043 kubelet[9782]: E0407 13:02:42.750262    9782 kuberuntime_image.go:55] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Apr 07 13:02:42 functional-880043 kubelet[9782]: E0407 13:02:42.750398    9782 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhsp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(1af8e1d8-ba31-4299-ab10-0b4ca8b7c998): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Apr 07 13:02:42 functional-880043 kubelet[9782]: E0407 13:02:42.751580    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:02:50 functional-880043 kubelet[9782]: E0407 13:02:50.623618    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	
	
	==> kubernetes-dashboard [76358b95e354] <==
	2025/04/07 12:59:52 Starting overwatch
	2025/04/07 12:59:52 Using namespace: kubernetes-dashboard
	2025/04/07 12:59:52 Using in-cluster config to connect to apiserver
	2025/04/07 12:59:52 Using secret token for csrf signing
	2025/04/07 12:59:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/07 12:59:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/07 12:59:52 Successful initial request to the apiserver, version: v1.32.2
	2025/04/07 12:59:52 Generating JWE encryption key
	2025/04/07 12:59:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/07 12:59:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/07 12:59:53 Initializing JWE encryption key from synchronized object
	2025/04/07 12:59:53 Creating in-cluster Sidecar client
	2025/04/07 12:59:53 Successful request to sidecar
	2025/04/07 12:59:53 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [447c218e669e] <==
	I0407 12:59:05.966905       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:59:06.021696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:59:06.021833       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:59:23.419925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:59:23.419994       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef8e56d6-4c09-4f4d-8a93-95ca3a55bd16", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-880043_f379bb15-39f4-451c-897a-0bfeff9ba8dc became leader
	I0407 12:59:23.420075       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-880043_f379bb15-39f4-451c-897a-0bfeff9ba8dc!
	I0407 12:59:23.520369       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-880043_f379bb15-39f4-451c-897a-0bfeff9ba8dc!
	I0407 12:59:48.937960       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0407 12:59:48.938188       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"bb9c8b56-54cd-4835-a9a2-20d165f175ea", APIVersion:"v1", ResourceVersion:"865", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0407 12:59:48.938077       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    5bdbe58d-4164-4e40-9128-ce068c3989cc 335 0 2025-04-07 12:57:33 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-04-07 12:57:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea &PersistentVolumeClaim{ObjectMeta:{myclaim  default  bb9c8b56-54cd-4835-a9a2-20d165f175ea 865 0 2025-04-07 12:59:48 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-04-07 12:59:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-04-07 12:59:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0407 12:59:48.938580       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea" provisioned
	I0407 12:59:48.938606       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0407 12:59:48.938617       1 volume_store.go:212] Trying to save persistentvolume "pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea"
	I0407 12:59:48.951195       1 volume_store.go:219] persistentvolume "pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea" saved
	I0407 12:59:48.951317       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"bb9c8b56-54cd-4835-a9a2-20d165f175ea", APIVersion:"v1", ResourceVersion:"865", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea
	
	
	==> storage-provisioner [81b8568376d2] <==
	I0407 12:58:43.949893       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:58:43.957744       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:58:43.957897       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
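
Note on the log above: the storage-provisioner section reports "default/myclaim" provisioned and persistentvolume "pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea" saved, while every kubelet error is Docker Hub's unauthenticated pull rate limit (toomanyrequests). A quick way to confirm that the volume side is healthy and only the image pulls are failing, assuming the functional-880043 cluster is still up (a sketch, not a command this job runs):

    kubectl --context functional-880043 get pvc myclaim -n default
    kubectl --context functional-880043 get pv pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea

A Bound claim next to a pod stuck in ImagePullBackOff points at the registry, not the provisioner.
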
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-880043 -n functional-880043
helpers_test.go:261: (dbg) Run:  kubectl --context functional-880043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-qdrzr nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-880043 describe pod busybox-mount mysql-58ccfd96bb-qdrzr nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-880043 describe pod busybox-mount mysql-58ccfd96bb-qdrzr nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-880043/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:59:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://e189b5117e5a73300156d5f7933b1c40f66419aae8d3aec64ad93418158641a4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 07 Apr 2025 12:59:35 +0000
	      Finished:     Mon, 07 Apr 2025 12:59:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pc95r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pc95r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  3m19s  default-scheduler  Successfully assigned default/busybox-mount to functional-880043
	  Normal  Pulling    3m19s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m16s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.206s (2.542s including waiting). Image size: 4403845 bytes.
	  Normal  Created    3m16s  kubelet            Created container: mount-munger
	  Normal  Started    3m16s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-qdrzr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-880043/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:59:33 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9rhrk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9rhrk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m17s                 default-scheduler  Successfully assigned default/mysql-58ccfd96bb-qdrzr to functional-880043
	  Warning  Failed     110s (x3 over 2m59s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    28s (x5 over 3m17s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     28s (x2 over 3m16s)   kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     28s (x5 over 3m16s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x11 over 3m16s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     1s (x11 over 3m16s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-880043/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:59:43 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhsp7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lhsp7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m8s                 default-scheduler  Successfully assigned default/nginx-svc to functional-880043
	  Warning  Failed     3m8s                 kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    21s (x12 over 3m7s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     21s (x12 over 3m7s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    9s (x5 over 3m8s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     9s (x5 over 3m8s)    kubelet            Error: ErrImagePull
	  Warning  Failed     9s (x4 over 2m52s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-880043/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:59:49 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wpd89 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wpd89:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-880043
	  Normal   Pulling    88s (x4 over 3m1s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     88s (x4 over 2m59s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     88s (x4 over 2m59s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x11 over 2m59s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     10s (x11 over 2m59s)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (187.62s)
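
The events above make the root cause explicit: pulls of "docker.io/nginx" and "docker.io/mysql:5.7" hit Docker Hub's unauthenticated pull rate limit, so the PVC machinery is never exercised past pod creation. One conventional mitigation, sketched here with an assumed secret name "regcred" and placeholder credentials (none of which appear in this run), is to make pulls authenticated via an imagePullSecret attached to the default service account:

    kubectl --context functional-880043 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<hub-user> --docker-password=<hub-token>
    kubectl --context functional-880043 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Authenticated pulls count against a per-account quota rather than the shared anonymous per-IP quota, which is what a busy CI runner exhausts.
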

x
+
TestFunctional/parallel/MySQL (602.49s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-880043 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-qdrzr" [4580e356-0fb6-4c7b-9e35-1a2c89a735f8] Pending
helpers_test.go:344: "mysql-58ccfd96bb-qdrzr" [4580e356-0fb6-4c7b-9e35-1a2c89a735f8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-880043 -n functional-880043
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-04-07 13:09:33.879597354 +0000 UTC m=+1323.625120664
functional_test.go:1816: (dbg) Run:  kubectl --context functional-880043 describe po mysql-58ccfd96bb-qdrzr -n default
functional_test.go:1816: (dbg) kubectl --context functional-880043 describe po mysql-58ccfd96bb-qdrzr -n default:
Name:             mysql-58ccfd96bb-qdrzr
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-880043/192.168.49.2
Start Time:       Mon, 07 Apr 2025 12:59:33 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9rhrk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9rhrk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-58ccfd96bb-qdrzr to functional-880043
Warning  Failed     8m32s (x3 over 9m41s)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m10s (x2 over 9m58s)   kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m10s (x5 over 9m58s)   kubelet            Error: ErrImagePull
Warning  Failed     4m55s (x19 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x20 over 9m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Normal   Pulling    4m27s (x6 over 9m59s)   kubelet            Pulling image "docker.io/mysql:5.7"
functional_test.go:1816: (dbg) Run:  kubectl --context functional-880043 logs mysql-58ccfd96bb-qdrzr -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-880043 logs mysql-58ccfd96bb-qdrzr -n default: exit status 1 (74.116913ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-qdrzr" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1816: kubectl --context functional-880043 logs mysql-58ccfd96bb-qdrzr -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
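
The entire 10m0s wait is consumed by ErrImagePull/ImagePullBackOff cycles on "docker.io/mysql:5.7"; nothing MySQL-specific ever runs. On a rate-limited runner, one way to take the registry out of the loop, assuming the image can be pulled (or is already cached) on the host, is to side-load it into the node with minikube's image load subcommand (a sketch, not something this job does):

    # pull once on the host, then inject into the cluster node's container runtime
    docker pull docker.io/mysql:5.7
    minikube -p functional-880043 image load docker.io/mysql:5.7

After the load, the container's IfNotPresent pull policy (visible in the kubelet log above) finds the image locally and the pod can start without contacting Docker Hub.
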
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-880043
helpers_test.go:235: (dbg) docker inspect functional-880043:

-- stdout --
	[
	    {
	        "Id": "259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770",
	        "Created": "2025-04-07T12:57:10.465216803Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 806964,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-04-07T12:57:10.501713297Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:037bd1b5a0f63899880a74b20d0e40b693fd199ade4ed9b883be5ed5726d15a6",
	        "ResolvConfPath": "/var/lib/docker/containers/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770/hostname",
	        "HostsPath": "/var/lib/docker/containers/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770/hosts",
	        "LogPath": "/var/lib/docker/containers/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770/259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770-json.log",
	        "Name": "/functional-880043",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-880043:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-880043",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "259ca6716a6d5dd93d0c437760d815ef4d40d03344d27c8fe1fa33ed4e826770",
	                "LowerDir": "/var/lib/docker/overlay2/94a4a8db1feca14367a58e8fa7706ada4faa390501404848784efd28fa8d29e5-init/diff:/var/lib/docker/overlay2/4ad95e7f4a49b487176ca9dc3e3437ef3df8ea71a4a72c4a666a7db5084d5e6d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/94a4a8db1feca14367a58e8fa7706ada4faa390501404848784efd28fa8d29e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/94a4a8db1feca14367a58e8fa7706ada4faa390501404848784efd28fa8d29e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/94a4a8db1feca14367a58e8fa7706ada4faa390501404848784efd28fa8d29e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-880043",
	                "Source": "/var/lib/docker/volumes/functional-880043/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-880043",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-880043",
	                "name.minikube.sigs.k8s.io": "functional-880043",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd7d8cca14b89b95b6245df81c0c6bf37a17fa38d021aaffbf8b6931298e62ee",
	            "SandboxKey": "/var/run/docker/netns/cd7d8cca14b8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-880043": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:7f:f5:fd:15:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e0d9bd542f32a6694070e026ee540cdd6c07c3f067cdf330da0a207557cfafe",
	                    "EndpointID": "5a3c8cb27d8f9e8a8a3df75f994934e6ef470d8d414dcb4e25cdb940eb5b1c64",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-880043",
	                        "259ca6716a6d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-880043 -n functional-880043
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-880043 logs -n 25: (1.013026081s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-880043 ssh -n                                              | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | functional-880043 sudo cat                                            |                   |         |         |                     |                     |
	|                | /home/docker/cp-test.txt                                              |                   |         |         |                     |                     |
	| tunnel         | functional-880043 tunnel                                              | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| cp             | functional-880043 cp                                                  | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | testdata/cp-test.txt                                                  |                   |         |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                                       |                   |         |         |                     |                     |
	| start          | -p functional-880043                                                  | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|                | --dry-run --memory                                                    |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                               |                   |         |         |                     |                     |
	|                | --driver=docker                                                       |                   |         |         |                     |                     |
	|                | --container-runtime=docker                                            |                   |         |         |                     |                     |
	| ssh            | functional-880043 ssh -n                                              | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | functional-880043 sudo cat                                            |                   |         |         |                     |                     |
	|                | /tmp/does/not/exist/cp-test.txt                                       |                   |         |         |                     |                     |
	| start          | -p functional-880043                                                  | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|                | --dry-run --alsologtostderr                                           |                   |         |         |                     |                     |
	|                | -v=1 --driver=docker                                                  |                   |         |         |                     |                     |
	|                | --container-runtime=docker                                            |                   |         |         |                     |                     |
	| image          | functional-880043 image load --daemon                                 | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | kicbase/echo-server:functional-880043                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043 image ls                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| image          | functional-880043 image load --daemon                                 | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | kicbase/echo-server:functional-880043                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043 image ls                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| image          | functional-880043 image save kicbase/echo-server:functional-880043    | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043 image rm                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | kicbase/echo-server:functional-880043                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043 image ls                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| image          | functional-880043 image load                                          | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                    | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | -p functional-880043                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                |                   |         |         |                     |                     |
	| update-context | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| update-context | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| update-context | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | update-context                                                        |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                |                   |         |         |                     |                     |
	| image          | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | image ls --format short                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | image ls --format yaml                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| ssh            | functional-880043 ssh pgrep                                           | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC |                     |
	|                | buildkitd                                                             |                   |         |         |                     |                     |
	| image          | functional-880043 image build -t                                      | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | localhost/my-image:functional-880043                                  |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                      |                   |         |         |                     |                     |
	| image          | functional-880043 image ls                                            | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	| image          | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | image ls --format json                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	| image          | functional-880043                                                     | functional-880043 | jenkins | v1.35.0 | 07 Apr 25 12:59 UTC | 07 Apr 25 12:59 UTC |
	|                | image ls --format table                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                     |                   |         |         |                     |                     |
	|----------------|-----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
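
Note: rows with an empty End Time above (the tunnel command and the two --dry-run starts) had not recorded completion when the log was captured. Assuming this minikube build supports the --audit flag of the logs subcommand (present in recent releases), the same table can be regenerated from the profile:

    out/minikube-linux-amd64 -p functional-880043 logs --audit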
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:59:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:59:43.613838  827609 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:59:43.613943  827609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:43.613955  827609 out.go:358] Setting ErrFile to fd 2...
	I0407 12:59:43.613962  827609 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:43.614185  827609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 12:59:43.614788  827609 out.go:352] Setting JSON to false
	I0407 12:59:43.616036  827609 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":74533,"bootTime":1743956251,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:59:43.616153  827609 start.go:139] virtualization: kvm guest
	I0407 12:59:43.618202  827609 out.go:177] * [functional-880043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:59:43.619661  827609 notify.go:220] Checking for updates...
	I0407 12:59:43.619680  827609 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:59:43.621068  827609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:59:43.622621  827609 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	I0407 12:59:43.623917  827609 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	I0407 12:59:43.625502  827609 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:59:43.626937  827609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:59:43.629135  827609 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:59:43.629705  827609 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:59:43.654648  827609 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:59:43.654785  827609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:59:43.707402  827609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-07 12:59:43.697843057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:59:43.707549  827609 docker.go:318] overlay module found
	I0407 12:59:43.709623  827609 out.go:177] * Using the docker driver based on existing profile
	I0407 12:59:43.710989  827609 start.go:297] selected driver: docker
	I0407 12:59:43.711015  827609 start.go:901] validating driver "docker" against &{Name:functional-880043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-880043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:59:43.711107  827609 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:59:43.711214  827609 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:59:43.768164  827609 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-07 12:59:43.759130522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:59:43.768816  827609 cni.go:84] Creating CNI manager for ""
	I0407 12:59:43.768895  827609 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:59:43.768955  827609 start.go:340] cluster config:
	{Name:functional-880043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-880043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:59:43.770678  827609 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Apr 07 13:00:04 functional-880043 dockerd[7912]: time="2025-04-07T13:00:04.025102474Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:19 functional-880043 dockerd[7912]: time="2025-04-07T13:00:19.761157990Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:19 functional-880043 dockerd[7912]: time="2025-04-07T13:00:19.763185800Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:23 functional-880043 dockerd[7912]: time="2025-04-07T13:00:23.740320331Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:23 functional-880043 dockerd[7912]: time="2025-04-07T13:00:23.742335592Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:28 functional-880043 dockerd[7912]: time="2025-04-07T13:00:28.755672755Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:00:28 functional-880043 dockerd[7912]: time="2025-04-07T13:00:28.757876313Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:01 functional-880043 dockerd[7912]: time="2025-04-07T13:01:01.755467644Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:01 functional-880043 dockerd[7912]: time="2025-04-07T13:01:01.757450480Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:11 functional-880043 dockerd[7912]: time="2025-04-07T13:01:11.742752912Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:11 functional-880043 dockerd[7912]: time="2025-04-07T13:01:11.744706230Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:23 functional-880043 dockerd[7912]: time="2025-04-07T13:01:23.766488156Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:01:23 functional-880043 dockerd[7912]: time="2025-04-07T13:01:23.767984641Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:02:23 functional-880043 dockerd[7912]: time="2025-04-07T13:02:23.827616352Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:02:23 functional-880043 cri-dockerd[8224]: time="2025-04-07T13:02:23Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Apr 07 13:02:42 functional-880043 dockerd[7912]: time="2025-04-07T13:02:42.747836445Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:02:42 functional-880043 dockerd[7912]: time="2025-04-07T13:02:42.749636221Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:02:55 functional-880043 dockerd[7912]: time="2025-04-07T13:02:55.763581390Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:02:55 functional-880043 dockerd[7912]: time="2025-04-07T13:02:55.766024851Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:05:06 functional-880043 dockerd[7912]: time="2025-04-07T13:05:06.824283595Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:05:06 functional-880043 cri-dockerd[8224]: time="2025-04-07T13:05:06Z" level=info msg="Stop pulling image docker.io/mysql:5.7: 5.7: Pulling from library/mysql"
	Apr 07 13:05:30 functional-880043 dockerd[7912]: time="2025-04-07T13:05:30.750866530Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:05:30 functional-880043 dockerd[7912]: time="2025-04-07T13:05:30.752492281Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:05:42 functional-880043 dockerd[7912]: time="2025-04-07T13:05:42.746671767Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Apr 07 13:05:42 functional-880043 dockerd[7912]: time="2025-04-07T13:05:42.748678009Z" level=error msg="Handler for POST /v1.43/images/create returned error: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
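
Note: every pull failure in this window is the same Docker Hub "toomanyrequests" error; the anonymous pull rate limit is blocking docker.io/mysql:5.7, which is why the MySQL pod never leaves ImagePullBackOff and the test times out. A minimal mitigation sketch, assuming Docker Hub credentials are available to the CI host (the <dockerhub-user> placeholder is illustrative, not from this run), is to authenticate the Docker daemon inside the minikube node so pulls count against the authenticated quota:

    out/minikube-linux-amd64 -p functional-880043 ssh -- docker login -u <dockerhub-user>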
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	76358b95e354a       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   ecfc3fb716b1a       kubernetes-dashboard-7779f9b69b-7bq6n
	1e3b784a78750       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   9 minutes ago       Running             dashboard-metrics-scraper   0                   2903a302e8721       dashboard-metrics-scraper-5d59dccf9b-wcrbs
	e189b5117e5a7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    9 minutes ago       Exited              mount-munger                0                   07169736165fb       busybox-mount
	0d5398f90ba3a       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969     10 minutes ago      Running             echoserver                  0                   e9df4ab06e11e       hello-node-connect-58f9cf68d8-pb5x5
	92262c5378c59       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969     10 minutes ago      Running             echoserver                  0                   936f77817bfa9       hello-node-fcfd88b6f-64qb4
	484a97e967142       f1332858868e1                                                                                          10 minutes ago      Running             kube-proxy                  3                   8bbfe85e940e9       kube-proxy-gbn64
	70ee4aa40401c       c69fa2e9cbf5f                                                                                          10 minutes ago      Running             coredns                     2                   46a37726ae107       coredns-668d6bf9bc-lm7vs
	447c218e669ee       6e38f40d628db                                                                                          10 minutes ago      Running             storage-provisioner         3                   708b4156118f8       storage-provisioner
	40e6660aefde0       a9e7e6b294baf                                                                                          10 minutes ago      Running             etcd                        3                   4a2aea7daf681       etcd-functional-880043
	9c96acba8fe07       85b7a174738ba                                                                                          10 minutes ago      Running             kube-apiserver              0                   b2a092e5055a9       kube-apiserver-functional-880043
	369e0e364d2dc       b6a454c5a800d                                                                                          10 minutes ago      Running             kube-controller-manager     3                   ec196ff086726       kube-controller-manager-functional-880043
	8208c3578a46f       d8e673e7c9983                                                                                          10 minutes ago      Running             kube-scheduler              3                   456f8e4b36374       kube-scheduler-functional-880043
	fc3847647ca6e       b6a454c5a800d                                                                                          10 minutes ago      Exited              kube-controller-manager     2                   1f8a0bf2233a9       kube-controller-manager-functional-880043
	5dffcf5ca8736       f1332858868e1                                                                                          10 minutes ago      Exited              kube-proxy                  2                   a21720d17afe3       kube-proxy-gbn64
	b6e602aa1a76f       a9e7e6b294baf                                                                                          10 minutes ago      Exited              etcd                        2                   a2e8cd0f7de1f       etcd-functional-880043
	64763be05b368       d8e673e7c9983                                                                                          10 minutes ago      Exited              kube-scheduler              2                   b09e58e40b38c       kube-scheduler-functional-880043
	81b8568376d24       6e38f40d628db                                                                                          10 minutes ago      Exited              storage-provisioner         2                   843dc116e0a74       storage-provisioner
	0a142e2456576       c69fa2e9cbf5f                                                                                          11 minutes ago      Exited              coredns                     1                   e58cd935951de       coredns-668d6bf9bc-lm7vs
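
Note: the ATTEMPT column above shows the control-plane containers on their second or third restart across the test's cluster restarts, while kube-apiserver runs at attempt 0 in a freshly created pod. This section appears to be CRI-level output; assuming crictl is present on the node (it ships in the kicbase image), a comparable listing can be taken directly, as a sketch rather than a statement of how the report was generated:

    out/minikube-linux-amd64 -p functional-880043 ssh -- sudo crictl ps -a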
	
	
	==> coredns [0a142e245657] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:46477 - 25864 "HINFO IN 6663384892568457339.3459290480997454336. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014830626s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
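
Note: the "forbidden" list/watch errors above are transient startup noise: right after the apiserver came back, the coredns ServiceAccount's RBAC grants had not yet synced, and the plugin retries until they do (this instance then serves on :53 and later shuts down cleanly on SIGTERM). A hedged way to verify the grant from outside the pod, using the standard kubectl auth can-i subcommand and this report's context name:

    kubectl --context functional-880043 auth can-i list endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:coredns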
	
	
	==> coredns [70ee4aa40401] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:53607 - 19291 "HINFO IN 7338312159466506823.6241587426982328848. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023276333s
	
	
	==> describe nodes <==
	Name:               functional-880043
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-880043
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=functional-880043
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T12_57_28_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-880043
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:09:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:07:54 +0000   Mon, 07 Apr 2025 12:57:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:07:54 +0000   Mon, 07 Apr 2025 12:57:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:07:54 +0000   Mon, 07 Apr 2025 12:57:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:07:54 +0000   Mon, 07 Apr 2025 12:57:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-880043
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 90d1bdfce53d418c8c347941c6c32b20
	  System UUID:                928416a6-fdbd-43a7-b082-ac5893cb488f
	  Boot ID:                    1751ef18-988c-47e7-9c05-4bbf13b6e72b
	  Kernel Version:             5.15.0-1078-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.0.4
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-pb5x5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-fcfd88b6f-64qb4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-58ccfd96bb-qdrzr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  kube-system                 coredns-668d6bf9bc-lm7vs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-880043                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-880043              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-880043     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-gbn64                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-880043              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-wcrbs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-7bq6n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-880043 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-880043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-880043 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node functional-880043 event: Registered Node functional-880043 in Controller
	  Normal   NodeNotReady             11m                kubelet          Node functional-880043 status is now: NodeNotReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-880043 event: Registered Node functional-880043 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-880043 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-880043 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-880043 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-880043 event: Registered Node functional-880043 in Controller
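
Note: as a quick consistency check of the Allocated resources figures above against the node's capacity, 1350m of CPU requests over 8 CPUs (8000m) is 1350/8000 ≈ 16%, the 700m CPU limit is ≈ 8%, and 682Mi of memory requests over 32859364Ki (≈ 31.3Gi) is ≈ 2%, all matching the table.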
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 81 9a df 00 56 08 06
	[Apr 7 12:51] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 1f 16 4b 47 75 08 06
	[  +0.000518] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
	[Apr 7 12:52] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 4c 24 4e 2e 75 08 06
	[  +0.000501] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
	[  +0.000635] IPv4: martian source 10.244.0.32 from 10.244.0.9, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d6 4f 27 69 51 39 08 06
	[ +12.201481] IPv4: martian source 10.244.0.33 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 d5 9a 49 e1 56 08 06
	[  +0.317597] IPv4: martian source 10.244.0.23 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e a6 82 33 b5 99 08 06
	[Apr 7 12:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 0f f5 24 36 5e 08 06
	[  +0.002154] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff e6 1b 3a 53 a4 84 08 06
	[Apr 7 12:58] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9e 4f 52 db 43 3f 08 06
	[ +39.620843] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 26 66 8e b4 08 fe 08 06
	[Apr 7 12:59] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 1e 5c 7f 37 4d 33 08 06
	
	
	==> etcd [40e6660aefde] <==
	{"level":"info","ts":"2025-04-07T12:59:02.548205Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:59:02.550026Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T12:59:02.550304Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T12:59:02.550352Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T12:59:02.550410Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-07T12:59:02.550429Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-07T12:59:04.340642Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-04-07T12:59:04.340689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-04-07T12:59:04.340727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-04-07T12:59:04.340743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2025-04-07T12:59:04.340763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-07T12:59:04.340774Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2025-04-07T12:59:04.340793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-04-07T12:59:04.342555Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-880043 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:59:04.342567Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:59:04.342569Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T12:59:04.342864Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T12:59:04.342910Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T12:59:04.343755Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:59:04.343758Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:59:04.344485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-04-07T12:59:04.344493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T13:09:04.359862Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1244}
	{"level":"info","ts":"2025-04-07T13:09:04.372721Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1244,"took":"12.5179ms","hash":1642737791,"current-db-size-bytes":4009984,"current-db-size":"4.0 MB","current-db-size-in-use-bytes":1814528,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-04-07T13:09:04.372779Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1642737791,"revision":1244,"compact-revision":-1}
	
	
	==> etcd [b6e602aa1a76] <==
	{"level":"info","ts":"2025-04-07T12:58:59.349790Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-04-07T12:58:59.526751Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","commit-index":571}
	{"level":"info","ts":"2025-04-07T12:58:59.528134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
	{"level":"info","ts":"2025-04-07T12:58:59.528229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 3"}
	{"level":"info","ts":"2025-04-07T12:58:59.528246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 3, commit: 571, applied: 0, lastindex: 571, lastterm: 3]"}
	{"level":"warn","ts":"2025-04-07T12:58:59.529639Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-04-07T12:58:59.534058Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":544}
	{"level":"info","ts":"2025-04-07T12:58:59.539501Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-04-07T12:58:59.545732Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aec36adc501070cc","timeout":"7s"}
	{"level":"info","ts":"2025-04-07T12:58:59.546169Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-04-07T12:58:59.546230Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-07T12:58:59.546794Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T12:58:59.550064Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-07T12:58:59.550237Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:58:59.550288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:58:59.550308Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T12:58:59.550575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-04-07T12:58:59.550640Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-04-07T12:58:59.550731Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:58:59.550769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T12:58:59.622052Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T12:58:59.622404Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T12:58:59.622448Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T12:58:59.622561Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-04-07T12:58:59.622587Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	
	
	==> kernel <==
	 13:09:35 up 20:52,  0 users,  load average: 0.22, 0.38, 0.63
	Linux functional-880043 5.15.0-1078-gcp #87~20.04.1-Ubuntu SMP Mon Feb 24 10:23:16 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [9c96acba8fe0] <==
	I0407 12:59:05.339879       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 12:59:05.345394       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 12:59:05.419932       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 12:59:05.420053       1 policy_source.go:240] refreshing policies
	I0407 12:59:05.421678       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0407 12:59:05.421700       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0407 12:59:05.427703       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 12:59:05.625750       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 12:59:06.231195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0407 12:59:06.637893       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0407 12:59:06.639290       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 12:59:06.783975       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 12:59:06.815737       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 12:59:06.837990       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 12:59:06.843677       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 12:59:08.900042       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 12:59:26.398576       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.243.197"}
	I0407 12:59:30.555783       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0407 12:59:30.659747       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.208.31"}
	I0407 12:59:30.901555       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.173.156"}
	I0407 12:59:33.526425       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.33.30"}
	I0407 12:59:43.041296       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.49.160"}
	I0407 12:59:46.772948       1 controller.go:615] quota admission added evaluator for: namespaces
	I0407 12:59:47.033015       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.21.53"}
	I0407 12:59:47.051012       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.242.41"}
	
	
	==> kube-controller-manager [369e0e364d2d] <==
	I0407 12:59:46.855740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="11.053333ms"
	I0407 12:59:46.861096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="5.307526ms"
	I0407 12:59:46.861200       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="59.454µs"
	I0407 12:59:46.925230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="40.643µs"
	I0407 12:59:46.944968       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="25.264855ms"
	I0407 12:59:47.030325       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="85.290883ms"
	I0407 12:59:47.030415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="45.769µs"
	I0407 12:59:48.572451       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="7.165735ms"
	I0407 12:59:48.572552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="55.148µs"
	I0407 12:59:48.632185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="69.374µs"
	I0407 12:59:52.636634       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="7.108582ms"
	I0407 12:59:52.636738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="53.114µs"
	I0407 13:00:04.630760       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="69.076µs"
	I0407 13:00:06.320511       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-880043"
	I0407 13:00:19.633011       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="87.742µs"
	I0407 13:00:32.632100       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="75.362µs"
	I0407 13:00:47.634955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="76.234µs"
	I0407 13:01:14.631972       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="89.09µs"
	I0407 13:01:28.631904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="106.967µs"
	I0407 13:02:35.633855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="85.129µs"
	I0407 13:02:49.929389       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-880043"
	I0407 13:02:50.630915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="83.087µs"
	I0407 13:05:19.634206       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="97.182µs"
	I0407 13:05:34.632528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="91.079µs"
	I0407 13:07:54.795340       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-880043"
	
	
	==> kube-controller-manager [fc3847647ca6] <==
	
	
	==> kube-proxy [484a97e96714] <==
	I0407 12:59:06.252662       1 server_linux.go:66] "Using iptables proxy"
	I0407 12:59:06.425901       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0407 12:59:06.425989       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 12:59:06.449896       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0407 12:59:06.449964       1 server_linux.go:170] "Using iptables Proxier"
	I0407 12:59:06.452383       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 12:59:06.452851       1 server.go:497] "Version info" version="v1.32.2"
	I0407 12:59:06.452890       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:59:06.455955       1 config.go:199] "Starting service config controller"
	I0407 12:59:06.456006       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 12:59:06.456035       1 config.go:105] "Starting endpoint slice config controller"
	I0407 12:59:06.456041       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 12:59:06.456064       1 config.go:329] "Starting node config controller"
	I0407 12:59:06.456069       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 12:59:06.556388       1 shared_informer.go:320] Caches are synced for service config
	I0407 12:59:06.556396       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 12:59:06.556408       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [5dffcf5ca873] <==
	I0407 12:58:59.630075       1 server_linux.go:66] "Using iptables proxy"
	
	
	==> kube-scheduler [64763be05b36] <==
	
	
	==> kube-scheduler [8208c3578a46] <==
	I0407 12:59:02.854853       1 serving.go:386] Generated self-signed cert in-memory
	I0407 12:59:05.432310       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 12:59:05.432345       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 12:59:05.437841       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I0407 12:59:05.437850       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0407 12:59:05.437849       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 12:59:05.437903       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0407 12:59:05.437904       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:59:05.437904       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0407 12:59:05.438080       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 12:59:05.438116       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 12:59:05.538177       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0407 12:59:05.538199       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 12:59:05.538919       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Apr 07 13:07:37 functional-880043 kubelet[9782]: E0407 13:07:37.624030    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:07:51 functional-880043 kubelet[9782]: E0407 13:07:51.622067    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:07:51 functional-880043 kubelet[9782]: E0407 13:07:51.623753    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:07:51 functional-880043 kubelet[9782]: E0407 13:07:51.623823    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:08:02 functional-880043 kubelet[9782]: E0407 13:08:02.621912    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:08:05 functional-880043 kubelet[9782]: E0407 13:08:05.624394    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:08:06 functional-880043 kubelet[9782]: E0407 13:08:06.624099    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:08:15 functional-880043 kubelet[9782]: E0407 13:08:15.622152    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:08:17 functional-880043 kubelet[9782]: E0407 13:08:17.624286    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:08:19 functional-880043 kubelet[9782]: E0407 13:08:19.624786    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:08:26 functional-880043 kubelet[9782]: E0407 13:08:26.622556    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:08:30 functional-880043 kubelet[9782]: E0407 13:08:30.624145    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:08:32 functional-880043 kubelet[9782]: E0407 13:08:32.623449    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:08:39 functional-880043 kubelet[9782]: E0407 13:08:39.628575    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:08:43 functional-880043 kubelet[9782]: E0407 13:08:43.630150    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:08:46 functional-880043 kubelet[9782]: E0407 13:08:46.623897    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:08:54 functional-880043 kubelet[9782]: E0407 13:08:54.622158    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:08:57 functional-880043 kubelet[9782]: E0407 13:08:57.624021    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:09:00 functional-880043 kubelet[9782]: E0407 13:09:00.623515    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:09:08 functional-880043 kubelet[9782]: E0407 13:09:08.621849    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:09:12 functional-880043 kubelet[9782]: E0407 13:09:12.623446    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	Apr 07 13:09:12 functional-880043 kubelet[9782]: E0407 13:09:12.623471    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:09:22 functional-880043 kubelet[9782]: E0407 13:09:22.622585    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d70390cc-cf47-4135-b3ff-0cdfac11e46d"
	Apr 07 13:09:24 functional-880043 kubelet[9782]: E0407 13:09:24.624052    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="1af8e1d8-ba31-4299-ab10-0b4ca8b7c998"
	Apr 07 13:09:25 functional-880043 kubelet[9782]: E0407 13:09:25.624483    9782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-qdrzr" podUID="4580e356-0fb6-4c7b-9e35-1a2c89a735f8"
	
	
	==> kubernetes-dashboard [76358b95e354] <==
	2025/04/07 12:59:52 Starting overwatch
	2025/04/07 12:59:52 Using namespace: kubernetes-dashboard
	2025/04/07 12:59:52 Using in-cluster config to connect to apiserver
	2025/04/07 12:59:52 Using secret token for csrf signing
	2025/04/07 12:59:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/04/07 12:59:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/04/07 12:59:52 Successful initial request to the apiserver, version: v1.32.2
	2025/04/07 12:59:52 Generating JWE encryption key
	2025/04/07 12:59:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/04/07 12:59:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/04/07 12:59:53 Initializing JWE encryption key from synchronized object
	2025/04/07 12:59:53 Creating in-cluster Sidecar client
	2025/04/07 12:59:53 Successful request to sidecar
	2025/04/07 12:59:53 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [447c218e669e] <==
	I0407 12:59:05.966905       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:59:06.021696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:59:06.021833       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 12:59:23.419925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 12:59:23.419994       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef8e56d6-4c09-4f4d-8a93-95ca3a55bd16", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-880043_f379bb15-39f4-451c-897a-0bfeff9ba8dc became leader
	I0407 12:59:23.420075       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-880043_f379bb15-39f4-451c-897a-0bfeff9ba8dc!
	I0407 12:59:23.520369       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-880043_f379bb15-39f4-451c-897a-0bfeff9ba8dc!
	I0407 12:59:48.937960       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0407 12:59:48.938188       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"bb9c8b56-54cd-4835-a9a2-20d165f175ea", APIVersion:"v1", ResourceVersion:"865", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0407 12:59:48.938077       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    5bdbe58d-4164-4e40-9128-ce068c3989cc 335 0 2025-04-07 12:57:33 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-04-07 12:57:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea &PersistentVolumeClaim{ObjectMeta:{myclaim  default  bb9c8b56-54cd-4835-a9a2-20d165f175ea 865 0 2025-04-07 12:59:48 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-04-07 12:59:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-04-07 12:59:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0407 12:59:48.938580       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea" provisioned
	I0407 12:59:48.938606       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0407 12:59:48.938617       1 volume_store.go:212] Trying to save persistentvolume "pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea"
	I0407 12:59:48.951195       1 volume_store.go:219] persistentvolume "pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea" saved
	I0407 12:59:48.951317       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"bb9c8b56-54cd-4835-a9a2-20d165f175ea", APIVersion:"v1", ResourceVersion:"865", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-bb9c8b56-54cd-4835-a9a2-20d165f175ea
	
	
	==> storage-provisioner [81b8568376d2] <==
	I0407 12:58:43.949893       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 12:58:43.957744       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 12:58:43.957897       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
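[Editor's note] The kubelet entries at the end of the log above all report one underlying error across three pods: unauthenticated pulls from Docker Hub hit the registry's rate limit ("toomanyrequests"), so docker.io/mysql:5.7, docker.io/nginx, and docker.io/nginx:alpine never start. The limit can be confirmed directly from the CI host with a plain pull; while the host is still rate-limited it should fail with the identical daemon error:

	# expected to fail while the host remains rate-limited:
	docker pull docker.io/nginx:alpine
	# Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit.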
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-880043 -n functional-880043
helpers_test.go:261: (dbg) Run:  kubectl --context functional-880043 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-qdrzr nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-880043 describe pod busybox-mount mysql-58ccfd96bb-qdrzr nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-880043 describe pod busybox-mount mysql-58ccfd96bb-qdrzr nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-880043/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:59:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://e189b5117e5a73300156d5f7933b1c40f66419aae8d3aec64ad93418158641a4
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 07 Apr 2025 12:59:35 +0000
	      Finished:     Mon, 07 Apr 2025 12:59:35 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pc95r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-pc95r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-880043
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.206s (2.542s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-qdrzr
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-880043/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:59:33 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9rhrk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9rhrk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-58ccfd96bb-qdrzr to functional-880043
	  Warning  Failed     8m34s (x3 over 9m43s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m12s (x2 over 10m)    kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m12s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m57s (x19 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m43s (x20 over 10m)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Normal   Pulling    4m29s (x6 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-880043/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:59:43 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhsp7 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lhsp7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m52s                   default-scheduler  Successfully assigned default/nginx-svc to functional-880043
	  Warning  Failed     9m52s                   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m53s (x5 over 9m52s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     6m53s (x5 over 9m52s)   kubelet            Error: ErrImagePull
	  Warning  Failed     6m53s (x4 over 9m36s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m42s (x22 over 9m51s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m42s (x22 over 9m51s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-880043/192.168.49.2
	Start Time:       Mon, 07 Apr 2025 12:59:49 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wpd89 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wpd89:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m46s                   default-scheduler  Successfully assigned default/sp-pod to functional-880043
	  Normal   Pulling    6m40s (x5 over 9m45s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m40s (x5 over 9m43s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m40s (x5 over 9m43s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m34s (x21 over 9m43s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m34s (x21 over 9m43s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.49s)
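[Editor's note] The MySQL failure is not a database or scheduling problem; the pod events above show every pull of docker.io/mysql:5.7 rejected by the rate limit. A minimal mitigation sketch for a rerun, assuming the host daemon can pull the image after authenticating (docker login and minikube's image load are standard CLI; the profile name is taken from the log):

	docker login                                          # lift the unauthenticated pull limit
	docker pull docker.io/mysql:5.7
	out/minikube-linux-amd64 -p functional-880043 image load docker.io/mysql:5.7

Side-loading the image this way means the kubelet inside the minikube node never has to contact Docker Hub at all.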

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-880043 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1af8e1d8-ba31-4299-ab10-0b4ca8b7c998] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-880043 -n functional-880043
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-04-07 13:03:43.343990771 +0000 UTC m=+973.089514070
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-880043 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-880043 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-880043/192.168.49.2
Start Time:       Mon, 07 Apr 2025 12:59:43 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:  10.244.0.12
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lhsp7 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lhsp7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  4m                    default-scheduler  Successfully assigned default/nginx-svc to functional-880043
Warning  Failed     4m                    kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    61s (x5 over 4m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     61s (x5 over 4m)      kubelet            Error: ErrImagePull
Warning  Failed     61s (x4 over 3m44s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     21s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    10s (x16 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-880043 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-880043 logs nginx-svc -n default: exit status 1 (64.200666ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-880043 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.68s)
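[Editor's note] The non-zero exit from "kubectl logs" above is expected: the nginx container never started, so there is no log stream to read. For a pod stuck in ImagePullBackOff the events carry the diagnosis instead; a sketch using standard kubectl (context name taken from the log):

	kubectl --context functional-880043 get events -n default \
	  --field-selector involvedObject.name=nginx-svc --sort-by=.lastTimestamp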

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (156.060103ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:361: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.16s)
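[Editor's note] This Setup failure heads a cascade: kicbase/echo-server:1.0 is never pulled, so the kicbase/echo-server:functional-880043 tag that the remaining ImageCommands subtests operate on never exists, and they fail on the missing image rather than on the command under test. A sketch of the state Setup is meant to establish (assuming an authenticated pull succeeds; the tag name is the one the later subtests reference):

	docker login                                          # avoid toomanyrequests
	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-880043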

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image load --daemon kicbase/echo-server:functional-880043 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-880043" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image load --daemon kicbase/echo-server:functional-880043 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-880043" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (138.649524ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:254: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image save kicbase/echo-server:functional-880043 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:403: expected "/home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.24s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:428: loading image into minikube from file: <nil>

** stderr ** 
	I0407 12:59:45.636117  828152 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:59:45.636276  828152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:45.636288  828152 out.go:358] Setting ErrFile to fd 2...
	I0407 12:59:45.636292  828152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:45.636513  828152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 12:59:45.637137  828152 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:59:45.637236  828152 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:59:45.637639  828152 cli_runner.go:164] Run: docker container inspect functional-880043 --format={{.State.Status}}
	I0407 12:59:45.655819  828152 ssh_runner.go:195] Run: systemctl --version
	I0407 12:59:45.655876  828152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-880043
	I0407 12:59:45.673876  828152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/functional-880043/id_rsa Username:docker}
	I0407 12:59:45.760652  828152 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar
	W0407 12:59:45.760736  828152 cache_images.go:253] Failed to load cached images for "functional-880043": loading images: stat /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar: no such file or directory
	I0407 12:59:45.760762  828152 cache_images.go:265] failed pushing to: functional-880043

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
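
This failure follows directly from ImageSaveToFile above: the tarball was never written, so the stat in cache_images.go finds nothing to load. A minimal sketch of the save/load round trip these tests exercise, for reproducing by hand against a running functional-880043 profile (the /tmp path is illustrative):

    # Save an image from the cluster to a tarball, load it back, then verify.
    out/minikube-linux-amd64 -p functional-880043 image save kicbase/echo-server:functional-880043 /tmp/echo-server-save.tar
    test -f /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-880043 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-880043 image ls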

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-880043
functional_test.go:436: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-880043: exit status 1 (17.22655ms)

** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-880043

** /stderr **
functional_test.go:438: failed to remove image from docker: exit status 1

** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-880043

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (81.83s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0407 13:03:43.478538  773373 retry.go:31] will retry after 2.922430698s: Temporary Error: Get "http:": http: no Host in request URL
I0407 13:03:46.401431  773373 retry.go:31] will retry after 4.512805682s: Temporary Error: Get "http:": http: no Host in request URL
I0407 13:03:50.914847  773373 retry.go:31] will retry after 6.810985314s: Temporary Error: Get "http:": http: no Host in request URL
E0407 13:03:57.110685  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
I0407 13:03:57.726231  773373 retry.go:31] will retry after 12.376849714s: Temporary Error: Get "http:": http: no Host in request URL
I0407 13:04:10.103666  773373 retry.go:31] will retry after 13.768381834s: Temporary Error: Get "http:": http: no Host in request URL
I0407 13:04:23.873087  773373 retry.go:31] will retry after 13.033544163s: Temporary Error: Get "http:": http: no Host in request URL
I0407 13:04:36.907576  773373 retry.go:31] will retry after 28.342398861s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-880043 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.99.49.160   10.99.49.160   80:32670/TCP   5m22s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (81.83s)
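
The repeated 'Get "http:": http: no Host in request URL' retries show the test assembled its URL from an empty host even though nginx-svc reports an EXTERNAL-IP, i.e. the tunnel-provided address was never picked up. A minimal sketch for checking this by hand, assuming "minikube tunnel" is left running for the same profile in a separate shell:

    # Read the LoadBalancer ingress IP that the tunnel should have populated.
    IP=$(kubectl --context functional-880043 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "nginx-svc ingress IP: ${IP:-<empty>}"
    # An empty IP reproduces the malformed "http:" request seen above.
    curl -s "http://${IP}" | grep -i 'Welcome to nginx!'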


Test pass (311/345)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 4.58
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 4.3
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.2
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.02
21 TestBinaryMirror 0.75
22 TestOffline 81.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 211.22
29 TestAddons/serial/Volcano 39.62
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.48
35 TestAddons/parallel/Registry 14.39
36 TestAddons/parallel/Ingress 19.27
37 TestAddons/parallel/InspektorGadget 11.13
38 TestAddons/parallel/MetricsServer 5.66
40 TestAddons/parallel/CSI 41.88
41 TestAddons/parallel/Headlamp 17
42 TestAddons/parallel/CloudSpanner 6.5
44 TestAddons/parallel/NvidiaDevicePlugin 5.53
45 TestAddons/parallel/Yakd 10.79
46 TestAddons/parallel/AmdGpuDevicePlugin 6.43
47 TestAddons/StoppedEnableDisable 10.91
48 TestCertOptions 24.8
49 TestCertExpiration 229.95
50 TestDockerFlags 27.12
51 TestForceSystemdFlag 53.22
52 TestForceSystemdEnv 38.58
54 TestKVMDriverInstallOrUpdate 1.46
58 TestErrorSpam/setup 21.93
59 TestErrorSpam/start 0.62
60 TestErrorSpam/status 0.91
61 TestErrorSpam/pause 1.23
62 TestErrorSpam/unpause 1.51
63 TestErrorSpam/stop 2
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 63.99
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.36
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.03
75 TestFunctional/serial/CacheCmd/cache/add_local 0.7
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.25
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 40.45
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.01
86 TestFunctional/serial/LogsFileCmd 1.01
87 TestFunctional/serial/InvalidService 4.05
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 9.52
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.03
97 TestFunctional/parallel/ServiceCmdConnect 9.58
98 TestFunctional/parallel/AddonsCmd 0.21
101 TestFunctional/parallel/SSHCmd 0.58
102 TestFunctional/parallel/CpCmd 1.79
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.7
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/DockerEnv/bash 1.2
115 TestFunctional/parallel/MountCmd/any-port 7.6
116 TestFunctional/parallel/ServiceCmd/DeployApp 9.17
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
118 TestFunctional/parallel/ProfileCmd/profile_list 0.46
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.47
122 TestFunctional/parallel/MountCmd/specific-port 1.94
123 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
124 TestFunctional/parallel/ServiceCmd/List 0.61
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
130 TestFunctional/parallel/ServiceCmd/Format 0.38
131 TestFunctional/parallel/ServiceCmd/URL 0.39
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.5
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 105.81
163 TestMultiControlPlane/serial/DeployApp 4.8
164 TestMultiControlPlane/serial/PingHostFromPods 1.09
165 TestMultiControlPlane/serial/AddWorkerNode 22.97
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
168 TestMultiControlPlane/serial/CopyFile 16.31
169 TestMultiControlPlane/serial/StopSecondaryNode 11.36
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
171 TestMultiControlPlane/serial/RestartSecondaryNode 40.69
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 233.21
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.38
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
176 TestMultiControlPlane/serial/StopCluster 32.54
177 TestMultiControlPlane/serial/RestartCluster 81.61
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 35.38
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
183 TestImageBuild/serial/Setup 22.89
184 TestImageBuild/serial/NormalBuild 0.9
185 TestImageBuild/serial/BuildWithBuildArg 0.67
186 TestImageBuild/serial/BuildWithDockerIgnore 0.54
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.46
191 TestJSONOutput/start/Command 60.49
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.54
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.43
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 10.94
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.21
216 TestKicCustomNetwork/create_custom_network 23.9
217 TestKicCustomNetwork/use_default_bridge_network 23.97
218 TestKicExistingNetwork 25.94
219 TestKicCustomSubnet 23.38
220 TestKicStaticIP 24.73
221 TestMainNoArgs 0.05
222 TestMinikubeProfile 53.32
225 TestMountStart/serial/StartWithMountFirst 9.35
226 TestMountStart/serial/VerifyMountFirst 0.24
227 TestMountStart/serial/StartWithMountSecond 6.69
228 TestMountStart/serial/VerifyMountSecond 0.25
229 TestMountStart/serial/DeleteFirst 1.46
230 TestMountStart/serial/VerifyMountPostDelete 0.25
231 TestMountStart/serial/Stop 1.17
232 TestMountStart/serial/RestartStopped 7.64
233 TestMountStart/serial/VerifyMountPostStop 0.25
236 TestMultiNode/serial/FreshStart2Nodes 72.88
237 TestMultiNode/serial/DeployApp2Nodes 41.42
238 TestMultiNode/serial/PingHostFrom2Pods 0.75
239 TestMultiNode/serial/AddNode 15.38
240 TestMultiNode/serial/MultiNodeLabels 0.06
241 TestMultiNode/serial/ProfileList 0.62
242 TestMultiNode/serial/CopyFile 9.09
243 TestMultiNode/serial/StopNode 2.13
244 TestMultiNode/serial/StartAfterStop 10.03
245 TestMultiNode/serial/RestartKeepsNodes 77.55
246 TestMultiNode/serial/DeleteNode 5.02
247 TestMultiNode/serial/StopMultiNode 21.4
248 TestMultiNode/serial/RestartMultiNode 53.9
249 TestMultiNode/serial/ValidateNameConflict 23.76
254 TestPreload 93.07
256 TestScheduledStopUnix 94.8
257 TestSkaffold 97.85
259 TestInsufficientStorage 12.8
260 TestRunningBinaryUpgrade 86.35
262 TestKubernetesUpgrade 330.95
263 TestMissingContainerUpgrade 132.14
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
266 TestNoKubernetes/serial/StartWithK8s 35.79
267 TestNoKubernetes/serial/StartWithStopK8s 16.71
268 TestNoKubernetes/serial/Start 8.5
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
270 TestNoKubernetes/serial/ProfileList 4.38
271 TestNoKubernetes/serial/Stop 4.08
272 TestNoKubernetes/serial/StartNoArgs 6.91
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
285 TestStoppedBinaryUpgrade/Setup 0.36
286 TestStoppedBinaryUpgrade/Upgrade 71.66
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
296 TestPause/serial/Start 64.68
297 TestPause/serial/SecondStartNoReconfiguration 33.02
298 TestNetworkPlugins/group/auto/Start 31.13
299 TestNetworkPlugins/group/auto/KubeletFlags 0.27
300 TestNetworkPlugins/group/auto/NetCatPod 9.22
301 TestPause/serial/Pause 0.56
302 TestPause/serial/VerifyStatus 0.32
303 TestPause/serial/Unpause 0.53
304 TestPause/serial/PauseAgain 0.6
305 TestPause/serial/DeletePaused 2.13
306 TestPause/serial/VerifyDeletedResources 0.74
307 TestNetworkPlugins/group/kindnet/Start 56.36
308 TestNetworkPlugins/group/auto/DNS 21.35
309 TestNetworkPlugins/group/auto/Localhost 0.11
310 TestNetworkPlugins/group/auto/HairPin 0.11
311 TestNetworkPlugins/group/calico/Start 60.97
312 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
313 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
314 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
315 TestNetworkPlugins/group/kindnet/DNS 0.15
316 TestNetworkPlugins/group/kindnet/Localhost 0.14
317 TestNetworkPlugins/group/kindnet/HairPin 0.14
318 TestNetworkPlugins/group/custom-flannel/Start 49.12
319 TestNetworkPlugins/group/calico/ControllerPod 6.01
320 TestNetworkPlugins/group/calico/KubeletFlags 0.28
321 TestNetworkPlugins/group/calico/NetCatPod 9.2
322 TestNetworkPlugins/group/calico/DNS 0.16
323 TestNetworkPlugins/group/calico/Localhost 0.17
324 TestNetworkPlugins/group/calico/HairPin 0.16
325 TestNetworkPlugins/group/false/Start 71.12
326 TestNetworkPlugins/group/flannel/Start 46.68
327 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
328 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
329 TestNetworkPlugins/group/bridge/Start 40.39
330 TestNetworkPlugins/group/custom-flannel/DNS 0.17
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
333 TestNetworkPlugins/group/kubenet/Start 62.49
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
335 TestNetworkPlugins/group/bridge/NetCatPod 9.2
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
338 TestNetworkPlugins/group/flannel/NetCatPod 8.24
339 TestNetworkPlugins/group/false/KubeletFlags 0.34
340 TestNetworkPlugins/group/false/NetCatPod 10.28
341 TestNetworkPlugins/group/bridge/DNS 0.21
342 TestNetworkPlugins/group/bridge/Localhost 0.14
343 TestNetworkPlugins/group/bridge/HairPin 0.16
344 TestNetworkPlugins/group/flannel/DNS 0.17
345 TestNetworkPlugins/group/flannel/Localhost 0.13
346 TestNetworkPlugins/group/flannel/HairPin 0.13
347 TestNetworkPlugins/group/false/DNS 0.15
348 TestNetworkPlugins/group/false/Localhost 0.13
349 TestNetworkPlugins/group/false/HairPin 0.12
350 TestNetworkPlugins/group/enable-default-cni/Start 43.37
352 TestStartStop/group/old-k8s-version/serial/FirstStart 133.31
354 TestStartStop/group/no-preload/serial/FirstStart 83.45
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
356 TestNetworkPlugins/group/kubenet/NetCatPod 9.2
357 TestNetworkPlugins/group/kubenet/DNS 0.15
358 TestNetworkPlugins/group/kubenet/Localhost 0.14
359 TestNetworkPlugins/group/kubenet/HairPin 0.14
360 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
361 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
363 TestStartStop/group/embed-certs/serial/FirstStart 67.96
364 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
365 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
366 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
368 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.34
369 TestStartStop/group/no-preload/serial/DeployApp 9.25
370 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
371 TestStartStop/group/no-preload/serial/Stop 10.89
372 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/no-preload/serial/SecondStart 263.25
374 TestStartStop/group/embed-certs/serial/DeployApp 8.25
375 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.01
376 TestStartStop/group/embed-certs/serial/Stop 10.77
377 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
378 TestStartStop/group/embed-certs/serial/SecondStart 263.6
379 TestStartStop/group/old-k8s-version/serial/DeployApp 9.5
380 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.28
381 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
382 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.49
383 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1
384 TestStartStop/group/old-k8s-version/serial/Stop 11.02
385 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
386 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.58
387 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
388 TestStartStop/group/old-k8s-version/serial/SecondStart 137.58
389 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
392 TestStartStop/group/old-k8s-version/serial/Pause 2.55
394 TestStartStop/group/newest-cni/serial/FirstStart 28.88
395 TestStartStop/group/newest-cni/serial/DeployApp 0
396 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
397 TestStartStop/group/newest-cni/serial/Stop 10.82
398 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
399 TestStartStop/group/newest-cni/serial/SecondStart 14.26
400 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
401 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
402 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
403 TestStartStop/group/newest-cni/serial/Pause 2.65
404 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
405 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
406 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
407 TestStartStop/group/no-preload/serial/Pause 2.47
408 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
409 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
410 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
411 TestStartStop/group/embed-certs/serial/Pause 2.46
412 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
413 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
414 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
415 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.37
TestDownloadOnly/v1.20.0/json-events (4.58s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-816551 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-816551 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.578998063s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.58s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:47:34.872367  773373 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0407 12:47:34.872469  773373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-816551
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-816551: exit status 85 (60.322002ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-816551 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC |          |
	|         | -p download-only-816551        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:47:30
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:47:30.333915  773385 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:47:30.334355  773385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:30.334408  773385 out.go:358] Setting ErrFile to fd 2...
	I0407 12:47:30.334421  773385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:30.334898  773385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	W0407 12:47:30.335094  773385 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20598-766623/.minikube/config/config.json: open /home/jenkins/minikube-integration/20598-766623/.minikube/config/config.json: no such file or directory
	I0407 12:47:30.335762  773385 out.go:352] Setting JSON to true
	I0407 12:47:30.336749  773385 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":73799,"bootTime":1743956251,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:47:30.336860  773385 start.go:139] virtualization: kvm guest
	I0407 12:47:30.338846  773385 out.go:97] [download-only-816551] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0407 12:47:30.338972  773385 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball: no such file or directory
	I0407 12:47:30.339004  773385 notify.go:220] Checking for updates...
	I0407 12:47:30.340304  773385 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:47:30.341466  773385 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:47:30.342507  773385 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	I0407 12:47:30.343526  773385 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	I0407 12:47:30.344611  773385 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:47:30.346592  773385 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:47:30.346854  773385 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:47:30.369996  773385 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:47:30.370092  773385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:47:30.739890  773385 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-07 12:47:30.730086654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:47:30.739998  773385 docker.go:318] overlay module found
	I0407 12:47:30.741559  773385 out.go:97] Using the docker driver based on user configuration
	I0407 12:47:30.741603  773385 start.go:297] selected driver: docker
	I0407 12:47:30.741610  773385 start.go:901] validating driver "docker" against <nil>
	I0407 12:47:30.741701  773385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:47:30.792278  773385 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-07 12:47:30.784075495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:47:30.792505  773385 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:47:30.793062  773385 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0407 12:47:30.793206  773385 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:47:30.794948  773385 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-816551 host does not exist
	  To start a cluster, run: "minikube start -p download-only-816551"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-816551
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.32.2/json-events (4.3s)

=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-691167 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-691167 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.303318229s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (4.30s)

TestDownloadOnly/v1.32.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:47:39.570414  773373 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 12:47:39.570475  773373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-766623/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)

TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-691167
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-691167: exit status 85 (59.353999ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-816551 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC |                     |
	|         | -p download-only-816551        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | 07 Apr 25 12:47 UTC |
	| delete  | -p download-only-816551        | download-only-816551 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC | 07 Apr 25 12:47 UTC |
	| start   | -o=json --download-only        | download-only-691167 | jenkins | v1.35.0 | 07 Apr 25 12:47 UTC |                     |
	|         | -p download-only-691167        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:47:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.24.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:47:35.308620  773727 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:47:35.308893  773727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:35.308903  773727 out.go:358] Setting ErrFile to fd 2...
	I0407 12:47:35.308908  773727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:47:35.309157  773727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 12:47:35.309756  773727 out.go:352] Setting JSON to true
	I0407 12:47:35.310725  773727 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":73804,"bootTime":1743956251,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:47:35.310835  773727 start.go:139] virtualization: kvm guest
	I0407 12:47:35.312597  773727 out.go:97] [download-only-691167] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:47:35.312761  773727 notify.go:220] Checking for updates...
	I0407 12:47:35.313883  773727 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:47:35.315014  773727 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:47:35.316109  773727 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	I0407 12:47:35.317164  773727 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	I0407 12:47:35.318284  773727 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0407 12:47:35.320268  773727 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:47:35.320470  773727 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:47:35.342657  773727 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:47:35.342773  773727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:47:35.388309  773727 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-04-07 12:47:35.379509697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:47:35.388409  773727 docker.go:318] overlay module found
	I0407 12:47:35.389816  773727 out.go:97] Using the docker driver based on user configuration
	I0407 12:47:35.389842  773727 start.go:297] selected driver: docker
	I0407 12:47:35.389847  773727 start.go:901] validating driver "docker" against <nil>
	I0407 12:47:35.389917  773727 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:47:35.437026  773727 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-04-07 12:47:35.428417845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:47:35.437222  773727 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:47:35.437703  773727 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0407 12:47:35.437836  773727 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:47:35.439704  773727 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-691167 host does not exist
	  To start a cluster, run: "minikube start -p download-only-691167"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

TestDownloadOnly/v1.32.2/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.20s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-691167
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.02s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-089924 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-089924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-089924
--- PASS: TestDownloadOnlyKic (1.02s)

TestBinaryMirror (0.75s)

=== RUN   TestBinaryMirror
I0407 12:47:41.224668  773373 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-222869 --alsologtostderr --binary-mirror http://127.0.0.1:43139 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-222869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-222869
--- PASS: TestBinaryMirror (0.75s)

TestOffline (81.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-803744 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-803744 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m19.766137265s)
helpers_test.go:175: Cleaning up "offline-docker-803744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-803744
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-803744: (2.193344888s)
--- PASS: TestOffline (81.96s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-662808
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-662808: exit status 85 (55.66757ms)

-- stdout --
	* Profile "addons-662808" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-662808"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-662808
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-662808: exit status 85 (54.088988ms)

-- stdout --
	* Profile "addons-662808" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-662808"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (211.22s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-662808 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-662808 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m31.21822175s)
--- PASS: TestAddons/Setup (211.22s)

TestAddons/serial/Volcano (39.62s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 13.269565ms
addons_test.go:815: volcano-admission stabilized in 14.232617ms
addons_test.go:823: volcano-controller stabilized in 14.462824ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-8kv76" [4d366062-f19c-4d6c-9056-884f4832c380] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003268903s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-85pf9" [d6baa9eb-fa69-4d48-8350-593020000e10] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003741925s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-x4ktv" [d1ef9ccc-50d7-4c70-b9c7-b928e35eb033] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003601603s
addons_test.go:842: (dbg) Run:  kubectl --context addons-662808 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-662808 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-662808 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2f914925-212b-4418-9578-0a8cc5eb7b03] Pending
helpers_test.go:344: "test-job-nginx-0" [2f914925-212b-4418-9578-0a8cc5eb7b03] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2f914925-212b-4418-9578-0a8cc5eb7b03] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003769819s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable volcano --alsologtostderr -v=1: (11.261917421s)
--- PASS: TestAddons/serial/Volcano (39.62s)
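
For reference, testdata/vcjob.yaml itself is not reproduced in this log. A minimal Volcano Job consistent with what the log shows (job test-job in namespace my-volcano, producing pod test-job-nginx-0) might look like the sketch below; the field values are assumptions, not the actual testdata:

kubectl --context addons-662808 create -f - <<'EOF'
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano   # assumed to exist before the job is created
spec:
  schedulerName: volcano
  minAvailable: 1
  tasks:
    - name: nginx         # task name feeds the pod name test-job-nginx-0
      replicas: 1
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: nginx
              image: nginx
EOF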

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-662808 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-662808 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.48s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-662808 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-662808 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [04ea121b-ee9d-496e-a3fa-0ee89a47776b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [04ea121b-ee9d-496e-a3fa-0ee89a47776b] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00401596s
addons_test.go:633: (dbg) Run:  kubectl --context addons-662808 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-662808 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-662808 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.48s)

TestAddons/parallel/Registry (14.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 27.41853ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-g6r5h" [b30d0273-c82f-46a6-a761-fd905b1d3783] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.042402231s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vqvmc" [9829d87d-bb8e-4c3d-b885-03deb72b4409] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002767633s
addons_test.go:331: (dbg) Run:  kubectl --context addons-662808 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-662808 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-662808 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.570924539s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 ip
2025/04/07 12:52:24 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.39s)

TestAddons/parallel/Ingress (19.27s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-662808 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-662808 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-662808 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8d7c9cc6-075d-4984-a7b4-83a8288ce6ca] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8d7c9cc6-075d-4984-a7b4-83a8288ce6ca] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003428951s
I0407 12:52:34.187493  773373 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-662808 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable ingress-dns --alsologtostderr -v=1: (1.182078162s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable ingress --alsologtostderr -v=1: (7.689592857s)
--- PASS: TestAddons/parallel/Ingress (19.27s)
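
The nginx-ingress-v1.yaml testdata is likewise not shown here. A sketch consistent with the check above (an Ingress for host nginx.example.com backed by the nginx pod/service) would be roughly:

kubectl --context addons-662808 apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress   # hypothetical name
spec:
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
EOF
# same probe the test runs from inside the node:
out/minikube-linux-amd64 -p addons-662808 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"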

TestAddons/parallel/InspektorGadget (11.13s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zk8zc" [f8b01843-7293-40e4-b715-79de427b417d] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004611174s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable inspektor-gadget --alsologtostderr -v=1: (6.123783496s)
--- PASS: TestAddons/parallel/InspektorGadget (11.13s)

TestAddons/parallel/MetricsServer (5.66s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.483791ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-5bqmp" [d86b42f2-cba5-4d53-8277-99e8dc49f20f] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003904829s
addons_test.go:402: (dbg) Run:  kubectl --context addons-662808 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

TestAddons/parallel/CSI (41.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0407 12:52:10.265061  773373 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.924897ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-662808 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-662808 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3036a795-d91e-47d3-8ac0-20929c6c640e] Pending
helpers_test.go:344: "task-pv-pod" [3036a795-d91e-47d3-8ac0-20929c6c640e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3036a795-d91e-47d3-8ac0-20929c6c640e] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004350384s
addons_test.go:511: (dbg) Run:  kubectl --context addons-662808 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-662808 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-662808 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-662808 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-662808 delete pod task-pv-pod: (1.167726029s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-662808 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-662808 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-662808 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-662808 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [244ca5cb-ddf6-4d51-bd6d-7dd83e4df20e] Pending
helpers_test.go:344: "task-pv-pod-restore" [244ca5cb-ddf6-4d51-bd6d-7dd83e4df20e] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003457244s
addons_test.go:553: (dbg) Run:  kubectl --context addons-662808 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-662808 delete pod task-pv-pod-restore: (1.007191898s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-662808 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-662808 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.603586965s)
--- PASS: TestAddons/parallel/CSI (41.88s)
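
The pvc.yaml and snapshot.yaml testdata files are referenced above but not printed. A minimal sketch matching the object names in the log, assuming the addon's usual csi-hostpath-sc storage class and csi-hostpath-snapclass snapshot class (both names are assumptions), would be:

kubectl --context addons-662808 create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  storageClassName: csi-hostpath-sc   # assumed class name
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF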

TestAddons/parallel/Headlamp (17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-662808 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-zrjkq" [bd915f4b-df42-42fc-bfae-c4be1ee8d93e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-zrjkq" [bd915f4b-df42-42fc-bfae-c4be1ee8d93e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003656602s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable headlamp --alsologtostderr -v=1: (6.208149318s)
--- PASS: TestAddons/parallel/Headlamp (17.00s)

TestAddons/parallel/CloudSpanner (6.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-jrtfm" [d116e847-444d-49ce-9119-38ab12c575c5] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00277429s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.50s)

TestAddons/parallel/NvidiaDevicePlugin (5.53s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rv6cl" [37f5078d-e3a7-43d5-a718-db741b45b741] Running
I0407 12:52:10.267910  773373 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:52:10.267959  773373 kapi.go:107] duration metric: took 2.911652ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.07078331s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.53s)

TestAddons/parallel/Yakd (10.79s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-z4wq4" [8cdc4a29-8ccf-4c96-b27b-ed67cc0ef0ea] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.029183971s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-662808 addons disable yakd --alsologtostderr -v=1: (5.763381365s)
--- PASS: TestAddons/parallel/Yakd (10.79s)

TestAddons/parallel/AmdGpuDevicePlugin (6.43s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-l66rh" [a5fb26c5-73e6-4735-a212-e1b9c91e7d5c] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.00339603s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-662808 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.43s)

TestAddons/StoppedEnableDisable (10.91s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-662808
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-662808: (10.651505194s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-662808
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-662808
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-662808
--- PASS: TestAddons/StoppedEnableDisable (10.91s)

TestCertOptions (24.8s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-502515 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-502515 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (22.07082991s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-502515 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-502515 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-502515 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-502515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-502515
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-502515: (2.162377811s)
--- PASS: TestCertOptions (24.80s)
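
To see what the openssl step above is checking (the extra SANs and the non-default API server port), a manual spot check looks like this; the jsonpath index assumes the kubeconfig holds a single cluster entry:

out/minikube-linux-amd64 -p cert-options-502515 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'   # should include 192.168.15.15 and www.google.com
kubectl --context cert-options-502515 config view \
  -o jsonpath='{.clusters[0].cluster.server}'   # should end in :8555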

TestCertExpiration (229.95s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-421136 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-421136 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (25.03851145s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-421136 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-421136 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (22.388979525s)
helpers_test.go:175: Cleaning up "cert-expiration-421136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-421136
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-421136: (2.519375804s)
--- PASS: TestCertExpiration (229.95s)
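
The two starts above first issue 3m certificates and then restart with --cert-expiration=8760h, which regenerates them once the originals are close to expiry. A quick manual check of the new lifetime (the cert path is assumed to match the layout seen in TestCertOptions):

out/minikube-linux-amd64 -p cert-expiration-421136 ssh \
  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"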

TestDockerFlags (27.12s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-145946 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-145946 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.422809366s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-145946 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-145946 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-145946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-145946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-145946: (2.09335902s)
--- PASS: TestDockerFlags (27.12s)

TestForceSystemdFlag (53.22s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-843632 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-843632 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (50.565318052s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-843632 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-843632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-843632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-843632: (2.3209147s)
--- PASS: TestForceSystemdFlag (53.22s)

TestForceSystemdEnv (38.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-397308 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-397308 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.995284359s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-397308 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-397308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-397308
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-397308: (2.231033181s)
--- PASS: TestForceSystemdEnv (38.58s)

TestKVMDriverInstallOrUpdate (1.46s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0407 13:36:51.171952  773373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:36:51.172115  773373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0407 13:36:51.206929  773373 install.go:62] docker-machine-driver-kvm2: exit status 1
W0407 13:36:51.207072  773373 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:36:51.207120  773373 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2923789789/001/docker-machine-driver-kvm2
I0407 13:36:51.360326  773373 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2923789789/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc00072dbf8 gz:0xc00072dc80 tar:0xc00072dc30 tar.bz2:0xc00072dc40 tar.gz:0xc00072dc50 tar.xz:0xc00072dc60 tar.zst:0xc00072dc70 tbz2:0xc00072dc40 tgz:0xc00072dc50 txz:0xc00072dc60 tzst:0xc00072dc70 xz:0xc00072dc88 zip:0xc00072dc90 zst:0xc00072dca0] Getters:map[file:0xc0016297d0 http:0xc00048ba40 https:0xc00048bbd0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:36:51.360390  773373 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2923789789/001/docker-machine-driver-kvm2
I0407 13:36:52.051235  773373 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:36:52.051322  773373 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0407 13:36:52.096547  773373 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0407 13:36:52.096590  773373 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0407 13:36:52.096679  773373 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0407 13:36:52.096724  773373 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2923789789/002/docker-machine-driver-kvm2
I0407 13:36:52.126982  773373 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2923789789/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940 0x554c940] Decompressors:map[bz2:0xc00072dbf8 gz:0xc00072dc80 tar:0xc00072dc30 tar.bz2:0xc00072dc40 tar.gz:0xc00072dc50 tar.xz:0xc00072dc60 tar.zst:0xc00072dc70 tbz2:0xc00072dc40 tgz:0xc00072dc50 txz:0xc00072dc60 tzst:0xc00072dc70 xz:0xc00072dc88 zip:0xc00072dc90 zst:0xc00072dca0] Getters:map[file:0xc000559a60 http:0xc00095f3b0 https:0xc00095f400] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0407 13:36:52.127036  773373 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2923789789/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.46s)
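
The warnings above show the intended fallback path: the checksum file for the arch-suffixed driver URL returns 404, so the download retries the common (unsuffixed) name. The equivalent fetch done by hand would be roughly:

VER=v1.3.0
BASE=https://github.com/kubernetes/minikube/releases/download/$VER
# try the arch-specific binary first, fall back to the common name on failure
curl -fLO "$BASE/docker-machine-driver-kvm2-amd64" \
  || curl -fLO "$BASE/docker-machine-driver-kvm2"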

TestErrorSpam/setup (21.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-769949 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-769949 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-769949 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-769949 --driver=docker  --container-runtime=docker: (21.932974922s)
--- PASS: TestErrorSpam/setup (21.93s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 status
--- PASS: TestErrorSpam/status (0.91s)

TestErrorSpam/pause (1.23s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 pause
--- PASS: TestErrorSpam/pause (1.23s)

TestErrorSpam/unpause (1.51s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

TestErrorSpam/stop (2s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 stop: (1.811245577s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-769949 --log_dir /tmp/nospam-769949 stop
--- PASS: TestErrorSpam/stop (2.00s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20598-766623/.minikube/files/etc/test/nested/copy/773373/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880043 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-880043 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m3.991491369s)
--- PASS: TestFunctional/serial/StartWithProxy (63.99s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.36s)

=== RUN   TestFunctional/serial/SoftStart
I0407 12:58:09.502328  773373 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880043 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-880043 --alsologtostderr -v=8: (29.354733741s)
functional_test.go:680: soft start took 29.355496475s for "functional-880043" cluster.
I0407 12:58:38.857492  773373 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (29.36s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-880043 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.03s)

TestFunctional/serial/CacheCmd/cache/add_local (0.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-880043 /tmp/TestFunctionalserialCacheCmdcacheadd_local3585588186/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cache add minikube-local-cache-test:functional-880043
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cache delete minikube-local-cache-test:functional-880043
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-880043
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.70s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (273.282466ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)
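
The round trip this test exercises, condensed from the log lines above: remove the cached image from the node, confirm it is gone, then let cache reload push it back.

out/minikube-linux-amd64 -p functional-880043 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-880043 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
out/minikube-linux-amd64 -p functional-880043 cache reload
out/minikube-linux-amd64 -p functional-880043 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again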

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 kubectl -- --context functional-880043 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-880043 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880043 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-880043 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.445997335s)
functional_test.go:778: restart took 40.446154789s for "functional-880043" cluster.
I0407 12:59:24.129796  773373 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (40.45s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-880043 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.01s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-880043 logs: (1.014303924s)
--- PASS: TestFunctional/serial/LogsCmd (1.01s)

TestFunctional/serial/LogsFileCmd (1.01s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 logs --file /tmp/TestFunctionalserialLogsFileCmd1763110002/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-880043 logs --file /tmp/TestFunctionalserialLogsFileCmd1763110002/001/logs.txt: (1.013497109s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/serial/InvalidService (4.05s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-880043 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-880043
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-880043: exit status 115 (324.371265ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30655 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-880043 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.05s)
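
For reference, the exit-status check above can be reproduced outside the test harness. The following is a minimal Go sketch (not part of the suite) using os/exec, with the binary path and profile name taken from this run; exit status 115 corresponds to the SVC_UNREACHABLE reason shown in stderr:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask minikube for the URL of a service that has no running pods behind it.
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-880043")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", ee.ExitCode()) // the test expects 115 (SVC_UNREACHABLE)
	}
}
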

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 config get cpus: exit status 14 (78.996238ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 config get cpus: exit status 14 (53.336438ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
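
The round trip above (unset, get, set, get, unset, get) pivots on `config get` returning exit status 14 for a missing key. A hedged sketch of the same loop, assuming the binary and profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a minikube subcommand against the functional-880043 profile
// and returns its exit code (0 on success).
func run(args ...string) int {
	cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-880043"}, args...)...)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
	}
	return 0
}

func main() {
	run("config", "unset", "cpus")
	fmt.Println(run("config", "get", "cpus")) // 14: key not found in config
	run("config", "set", "cpus", "2")
	fmt.Println(run("config", "get", "cpus")) // 0 while the key is set
	run("config", "unset", "cpus")
	fmt.Println(run("config", "get", "cpus")) // 14 again after unset
}
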

TestFunctional/parallel/DashboardCmd (9.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-880043 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-880043 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 828370: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.52s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880043 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-880043 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (189.452372ms)
-- stdout --
	* [functional-880043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0407 12:59:43.432414  827378 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:59:43.432549  827378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:43.432566  827378 out.go:358] Setting ErrFile to fd 2...
	I0407 12:59:43.432572  827378 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:43.432818  827378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 12:59:43.433510  827378 out.go:352] Setting JSON to false
	I0407 12:59:43.435360  827378 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":74532,"bootTime":1743956251,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:59:43.435477  827378 start.go:139] virtualization: kvm guest
	I0407 12:59:43.437833  827378 out.go:177] * [functional-880043] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0407 12:59:43.439410  827378 notify.go:220] Checking for updates...
	I0407 12:59:43.439459  827378 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:59:43.441894  827378 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:59:43.443461  827378 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	I0407 12:59:43.444834  827378 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	I0407 12:59:43.446177  827378 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:59:43.447638  827378 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:59:43.449300  827378 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:59:43.449933  827378 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:59:43.477441  827378 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:59:43.477562  827378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:59:43.547080  827378 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-07 12:59:43.535734482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:59:43.547188  827378 docker.go:318] overlay module found
	I0407 12:59:43.549951  827378 out.go:177] * Using the docker driver based on existing profile
	I0407 12:59:43.551280  827378 start.go:297] selected driver: docker
	I0407 12:59:43.551298  827378 start.go:901] validating driver "docker" against &{Name:functional-880043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-880043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:59:43.551468  827378 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:59:43.553949  827378 out.go:201] 
	W0407 12:59:43.555480  827378 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0407 12:59:43.557024  827378 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880043 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.40s)
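
The dry run validates flags without mutating the existing cluster: 250MB is below the 1800MB usable minimum the log reports, so `start` fails fast with exit status 23. A minimal sketch of the same check, assuming the paths from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run performs validation only; the undersized --memory request
	// should be rejected before anything is created.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-880043",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=docker")
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit code:", ee.ExitCode()) // expected 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
		}
	}
}
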

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-880043 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-880043 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (175.106608ms)
-- stdout --
	* [functional-880043] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0407 12:59:42.238051  826293 out.go:345] Setting OutFile to fd 1 ...
	I0407 12:59:42.238314  826293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:42.238327  826293 out.go:358] Setting ErrFile to fd 2...
	I0407 12:59:42.238334  826293 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:59:42.238751  826293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 12:59:42.239495  826293 out.go:352] Setting JSON to false
	I0407 12:59:42.241277  826293 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":74531,"bootTime":1743956251,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0407 12:59:42.241345  826293 start.go:139] virtualization: kvm guest
	I0407 12:59:42.243297  826293 out.go:177] * [functional-880043] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0407 12:59:42.244697  826293 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:59:42.244697  826293 notify.go:220] Checking for updates...
	I0407 12:59:42.247535  826293 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:59:42.248992  826293 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	I0407 12:59:42.250219  826293 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	I0407 12:59:42.251550  826293 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0407 12:59:42.252747  826293 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:59:42.254650  826293 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:59:42.255355  826293 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:59:42.284427  826293 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
	I0407 12:59:42.284571  826293 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 12:59:42.342361  826293 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-04-07 12:59:42.330785125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 12:59:42.342474  826293 docker.go:318] overlay module found
	I0407 12:59:42.343979  826293 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0407 12:59:42.345316  826293 start.go:297] selected driver: docker
	I0407 12:59:42.345357  826293 start.go:901] validating driver "docker" against &{Name:functional-880043 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-880043 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:59:42.345501  826293 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:59:42.347866  826293 out.go:201] 
	W0407 12:59:42.349308  826293 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0407 12:59:42.350890  826293 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
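
The French output above reuses the DryRun scenario; the only difference is the locale environment. A sketch assuming, as the test name suggests, that setting LC_ALL=fr is what selects minikube's translated messages:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-880043",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=docker")
	// Assumption: minikube picks its translations from the standard locale variables.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out) // same exit 23, but "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."
}
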

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (9.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-880043 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-880043 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-pb5x5" [b059ab7c-5910-4fae-8291-1d0a2da905e1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-pb5x5" [b059ab7c-5910-4fae-8291-1d0a2da905e1] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003577241s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:31866
functional_test.go:1692: http://192.168.49.2:31866: success! body:

Hostname: hello-node-connect-58f9cf68d8-pb5x5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31866
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.58s)
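
The body above is the standard echoserver reply. Once `minikube service ... --url` has printed the NodePort endpoint, the connectivity check reduces to a plain HTTP GET; a minimal sketch that takes the URL as its first argument:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// e.g. go run main.go http://192.168.49.2:31866
	resp, err := http.Get(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s", body) // echoserver reports Hostname, request info and headers
}
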

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.79s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh -n functional-880043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cp functional-880043:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1010498954/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh -n functional-880043 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh -n functional-880043 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.79s)
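
The cp checks follow a copy-then-read-back pattern: copy a file into the node, cat it back over ssh, and compare. A hedged sketch of one leg of that round trip, using the same paths the log shows:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	// Copy a local file into the node's filesystem.
	if err := exec.Command(mk, "-p", "functional-880043",
		"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}
	// Read it back over ssh (-n selects the node) and compare byte-for-byte.
	remote, err := exec.Command(mk, "-p", "functional-880043", "ssh", "-n", "functional-880043",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	local, _ := os.ReadFile("testdata/cp-test.txt")
	fmt.Println("contents match:", bytes.Equal(local, remote))
}
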

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/773373/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo cat /etc/test/nested/copy/773373/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.7s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/773373.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo cat /etc/ssl/certs/773373.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/773373.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo cat /usr/share/ca-certificates/773373.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/7733732.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo cat /etc/ssl/certs/7733732.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/7733732.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo cat /usr/share/ca-certificates/7733732.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-880043 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 ssh "sudo systemctl is-active crio": exit status 1 (286.318695ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
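
The "Non-zero exit" here is the passing outcome: with docker as the active runtime, `systemctl is-active crio` prints "inactive" and exits 3 inside the node, and `minikube ssh` surfaces that as a failure, which is exactly what the test treats as "disabled". A minimal sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-880043",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)                     // "inactive"
	fmt.Println("crio disabled:", err != nil) // non-zero exit means the runtime is not active
}
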

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/DockerEnv/bash (1.2s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-880043 docker-env) && out/minikube-linux-amd64 status -p functional-880043"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-880043 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.20s)

TestFunctional/parallel/MountCmd/any-port (7.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdany-port1247770489/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1744030770276657808" to /tmp/TestFunctionalparallelMountCmdany-port1247770489/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1744030770276657808" to /tmp/TestFunctionalparallelMountCmdany-port1247770489/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1744030770276657808" to /tmp/TestFunctionalparallelMountCmdany-port1247770489/001/test-1744030770276657808
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.994656ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0407 12:59:30.644007  773373 retry.go:31] will retry after 252.020679ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr  7 12:59 created-by-test
-rw-r--r-- 1 docker docker 24 Apr  7 12:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr  7 12:59 test-1744030770276657808
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh cat /mount-9p/test-1744030770276657808
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-880043 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dda1e6f6-ff86-4475-801c-5aab7a4d25b8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dda1e6f6-ff86-4475-801c-5aab7a4d25b8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dda1e6f6-ff86-4475-801c-5aab7a4d25b8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00308565s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-880043 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdany-port1247770489/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.60s)
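
Because the 9p mount runs as a background daemon, the first findmnt probe can race it; the harness retries with a short backoff (252ms in the log above) until the mount appears. A sketch of that polling loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 0; attempt < 10; attempt++ {
		// Succeeds only once the 9p filesystem is mounted at /mount-9p.
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-880043",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount is ready")
			return
		}
		time.Sleep(250 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}
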

TestFunctional/parallel/ServiceCmd/DeployApp (9.17s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-880043 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-880043 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-64qb4" [139c7d72-d04e-45b8-b6c6-138b22cadaf7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-64qb4" [139c7d72-d04e-45b8-b6c6-138b22cadaf7] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004774099s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.17s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "405.505515ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "57.821779ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "369.443988ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "56.12442ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdspecific-port431584182/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.649771ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0407 12:59:38.134395  773373 retry.go:31] will retry after 713.476591ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdspecific-port431584182/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 ssh "sudo umount -f /mount-9p": exit status 1 (250.979087ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-880043 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdspecific-port431584182/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup646300414/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup646300414/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup646300414/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T" /mount1: exit status 1 (346.395635ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0407 12:59:40.160989  773373 retry.go:31] will retry after 594.622624ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-880043 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup646300414/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup646300414/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-880043 /tmp/TestFunctionalparallelMountCmdVerifyCleanup646300414/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 service list -o json
functional_test.go:1511: Took "502.508697ms" to run "out/minikube-linux-amd64 -p functional-880043 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:32093
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:32093
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-880043 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-880043 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-880043 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-880043 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 826849: os: process already finished
helpers_test.go:502: unable to terminate pid 826479: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)
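RunSecondTunnel launches two concurrent "minikube tunnel" daemons and then tears both down, which is why the stop helpers above report processes that have already finished. A minimal sketch of the same background-daemon pattern in plain shell (the cleanup handling is illustrative, not the harness's actual helper code):

# Run a tunnel in the background and make sure it is reaped on exit.
out/minikube-linux-amd64 -p functional-880043 tunnel --alsologtostderr &
TUNNEL_PID=$!
trap 'kill "$TUNNEL_PID" 2>/dev/null; wait "$TUNNEL_PID" 2>/dev/null' EXIT
# ... exercise LoadBalancer services here ...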

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-880043 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880043 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-880043
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880043 image ls --format short --alsologtostderr:
I0407 12:59:56.262661  829331 out.go:345] Setting OutFile to fd 1 ...
I0407 12:59:56.262967  829331 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:56.262979  829331 out.go:358] Setting ErrFile to fd 2...
I0407 12:59:56.262985  829331 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:56.263206  829331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
I0407 12:59:56.263883  829331 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:56.264009  829331 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:56.264409  829331 cli_runner.go:164] Run: docker container inspect functional-880043 --format={{.State.Status}}
I0407 12:59:56.282760  829331 ssh_runner.go:195] Run: systemctl --version
I0407 12:59:56.282828  829331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-880043
I0407 12:59:56.300858  829331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/functional-880043/id_rsa Username:docker}
I0407 12:59:56.388072  829331 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
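The short format prints one bare image reference per line, which makes existence checks trivial. A minimal sketch (the grep pattern is illustrative):

# Fail loudly if an expected image is not loaded in the cluster's runtime.
out/minikube-linux-amd64 -p functional-880043 image ls --format short \
  | grep -qx 'registry.k8s.io/pause:3.10' || echo 'pause:3.10 missing' >&2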

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880043 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| localhost/my-image                          | functional-880043 | 6c6582c9b0030 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-880043 | 7f8b32cb9b07e | 30B    |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-proxy                  | v1.32.2           | f1332858868e1 | 94MB   |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.32.2           | d8e673e7c9983 | 69.6MB |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/kube-apiserver              | v1.32.2           | 85b7a174738ba | 97MB   |
| registry.k8s.io/kube-controller-manager     | v1.32.2           | b6a454c5a800d | 89.7MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880043 image ls --format table --alsologtostderr:
I0407 12:59:59.364386  829832 out.go:345] Setting OutFile to fd 1 ...
I0407 12:59:59.364540  829832 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:59.364551  829832 out.go:358] Setting ErrFile to fd 2...
I0407 12:59:59.364558  829832 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:59.364783  829832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
I0407 12:59:59.365438  829832 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:59.365565  829832 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:59.365950  829832 cli_runner.go:164] Run: docker container inspect functional-880043 --format={{.State.Status}}
I0407 12:59:59.383810  829832 ssh_runner.go:195] Run: systemctl --version
I0407 12:59:59.383870  829832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-880043
I0407 12:59:59.401566  829832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/functional-880043/id_rsa Username:docker}
I0407 12:59:59.488396  829832 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0407 13:01:13.251164  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:13.257567  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:13.268991  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:13.290434  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:13.331914  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:13.413458  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:13.574744  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:13.897427  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:14.538805  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:15.820511  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:18.381906  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:23.503245  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:33.745086  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:01:54.226540  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:02:35.188521  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880043 image ls --format json --alsologtostderr:
[{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"6c6582c9b00300b61b9a109aad5cebbd03a8a816b2b5a6088e02b786a0253799","repoDigests":[],"repoTags":["localhost/my-image:functional-880043"],"size":"1240000"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"97000000"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"94000000"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"89700000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7f8b32cb9b07e13f72da09f551b7dc24a32e969090ad5a922fb057a804e171fc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-880043"],"size":"30"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"69600000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880043 image ls --format json --alsologtostderr:
I0407 12:59:59.161766  829783 out.go:345] Setting OutFile to fd 1 ...
I0407 12:59:59.162038  829783 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:59.162049  829783 out.go:358] Setting ErrFile to fd 2...
I0407 12:59:59.162054  829783 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:59.162256  829783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
I0407 12:59:59.163752  829783 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:59.164251  829783 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:59.164648  829783 cli_runner.go:164] Run: docker container inspect functional-880043 --format={{.State.Status}}
I0407 12:59:59.183247  829783 ssh_runner.go:195] Run: systemctl --version
I0407 12:59:59.183296  829783 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-880043
I0407 12:59:59.201540  829783 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/functional-880043/id_rsa Username:docker}
I0407 12:59:59.288401  829783 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
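Each JSON entry carries id, repoDigests, repoTags, and size (sizes are strings of bytes in this output). A minimal sketch that totals them, assuming jq is available:

# Sum the reported size of every image in the list.
out/minikube-linux-amd64 -p functional-880043 image ls --format json \
  | jq '[.[].size | tonumber] | add'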

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-880043 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "69600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7f8b32cb9b07e13f72da09f551b7dc24a32e969090ad5a922fb057a804e171fc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-880043
size: "30"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "89700000"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "94000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "97000000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880043 image ls --format yaml --alsologtostderr:
I0407 12:59:56.462211  829379 out.go:345] Setting OutFile to fd 1 ...
I0407 12:59:56.462333  829379 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:56.462342  829379 out.go:358] Setting ErrFile to fd 2...
I0407 12:59:56.462346  829379 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:56.462545  829379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
I0407 12:59:56.463167  829379 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:56.463263  829379 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:56.463728  829379 cli_runner.go:164] Run: docker container inspect functional-880043 --format={{.State.Status}}
I0407 12:59:56.482458  829379 ssh_runner.go:195] Run: systemctl --version
I0407 12:59:56.482503  829379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-880043
I0407 12:59:56.500159  829379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/functional-880043/id_rsa Username:docker}
I0407 12:59:56.587961  829379 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-880043 ssh pgrep buildkitd: exit status 1 (250.664141ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image build -t localhost/my-image:functional-880043 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-880043 image build -t localhost/my-image:functional-880043 testdata/build --alsologtostderr: (2.041194532s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-880043 image build -t localhost/my-image:functional-880043 testdata/build --alsologtostderr:
I0407 12:59:56.914424  829522 out.go:345] Setting OutFile to fd 1 ...
I0407 12:59:56.914562  829522 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:56.914575  829522 out.go:358] Setting ErrFile to fd 2...
I0407 12:59:56.914588  829522 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:59:56.914802  829522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
I0407 12:59:56.915471  829522 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:56.916068  829522 config.go:182] Loaded profile config "functional-880043": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:59:56.916478  829522 cli_runner.go:164] Run: docker container inspect functional-880043 --format={{.State.Status}}
I0407 12:59:56.934459  829522 ssh_runner.go:195] Run: systemctl --version
I0407 12:59:56.934536  829522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-880043
I0407 12:59:56.952084  829522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/functional-880043/id_rsa Username:docker}
I0407 12:59:57.044286  829522 build_images.go:161] Building image from path: /tmp/build.2549979078.tar
I0407 12:59:57.044431  829522 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 12:59:57.053630  829522 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2549979078.tar
I0407 12:59:57.057011  829522 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2549979078.tar: stat -c "%s %y" /var/lib/minikube/build/build.2549979078.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2549979078.tar': No such file or directory
I0407 12:59:57.057041  829522 ssh_runner.go:362] scp /tmp/build.2549979078.tar --> /var/lib/minikube/build/build.2549979078.tar (3072 bytes)
I0407 12:59:57.081029  829522 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2549979078
I0407 12:59:57.089922  829522 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2549979078 -xf /var/lib/minikube/build/build.2549979078.tar
I0407 12:59:57.098727  829522 docker.go:360] Building image: /var/lib/minikube/build/build.2549979078
I0407 12:59:57.098798  829522 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-880043 /var/lib/minikube/build/build.2549979078
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:6c6582c9b00300b61b9a109aad5cebbd03a8a816b2b5a6088e02b786a0253799 done
#8 naming to localhost/my-image:functional-880043 done
#8 DONE 0.0s
I0407 12:59:58.885607  829522 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-880043 /var/lib/minikube/build/build.2549979078: (1.786784868s)
I0407 12:59:58.885683  829522 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2549979078
I0407 12:59:58.894615  829522 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2549979078.tar
I0407 12:59:58.903297  829522 build_images.go:217] Built localhost/my-image:functional-880043 from /tmp/build.2549979078.tar
I0407 12:59:58.903332  829522 build_images.go:133] succeeded building to: functional-880043
I0407 12:59:58.903336  829522 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.50s)
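The BuildKit trace above (a 97-byte Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) implies a build context roughly like the following. This is a reconstruction from the log, not the verbatim contents of testdata/build:

# Recreate an equivalent context and build it inside the cluster's runtime.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
printf 'hello\n' > content.txt     # stand-in for the 62-byte context seen above
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-880043 image build -t localhost/my-image:functional-880043 .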

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image rm kicbase/echo-server:functional-880043 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-880043 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)
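Removal can be verified the same way the test does, by listing afterwards. A minimal sketch:

# Remove the tagged image, then confirm it is gone from the runtime.
out/minikube-linux-amd64 -p functional-880043 image rm kicbase/echo-server:functional-880043 --alsologtostderr
out/minikube-linux-amd64 -p functional-880043 image ls --format short \
  | grep -q 'kicbase/echo-server:functional-880043' && echo 'still present' >&2 || echo 'removed'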

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-880043 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
E0407 13:06:13.250240  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:06:40.952177  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-880043
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-880043
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-880043
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (105.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-208854 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0407 13:11:13.250675  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-208854 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m45.112522861s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (105.81s)
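The --ha flag provisions a multi-control-plane cluster (three control planes here, fronted by the 192.168.49.254 virtual endpoint that later status checks query). A minimal sketch of the same invocation, with flags copied from the log:

# Start an HA cluster on the docker driver and verify node status.
out/minikube-linux-amd64 start -p ha-208854 --wait=true --memory=2200 --ha \
  -v=7 --alsologtostderr --driver=docker --container-runtime=docker
out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr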

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-208854 -- rollout status deployment/busybox: (2.721145892s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-448mv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-m5skx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-mzjhd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-448mv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-m5skx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-mzjhd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-448mv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-m5skx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-mzjhd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.80s)
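The deploy check reduces to: apply the manifest, wait for the rollout, then resolve cluster DNS from inside every replica. A minimal sketch of that loop (pod names are discovered at run time):

# Roll out the busybox test deployment and verify DNS from each pod.
out/minikube-linux-amd64 kubectl -p ha-208854 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p ha-208854 -- rollout status deployment/busybox
for pod in $(out/minikube-linux-amd64 kubectl -p ha-208854 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done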

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-448mv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-448mv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-m5skx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-m5skx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-mzjhd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-208854 -- exec busybox-58667487b6-mzjhd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (22.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-208854 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-208854 -v=7 --alsologtostderr: (22.116113508s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.97s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-208854 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp testdata/cp-test.txt ha-208854:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile524923838/001/cp-test_ha-208854.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854:/home/docker/cp-test.txt ha-208854-m02:/home/docker/cp-test_ha-208854_ha-208854-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m02 "sudo cat /home/docker/cp-test_ha-208854_ha-208854-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854:/home/docker/cp-test.txt ha-208854-m03:/home/docker/cp-test_ha-208854_ha-208854-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m03 "sudo cat /home/docker/cp-test_ha-208854_ha-208854-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854:/home/docker/cp-test.txt ha-208854-m04:/home/docker/cp-test_ha-208854_ha-208854-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m04 "sudo cat /home/docker/cp-test_ha-208854_ha-208854-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp testdata/cp-test.txt ha-208854-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile524923838/001/cp-test_ha-208854-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m02:/home/docker/cp-test.txt ha-208854:/home/docker/cp-test_ha-208854-m02_ha-208854.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854 "sudo cat /home/docker/cp-test_ha-208854-m02_ha-208854.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m02:/home/docker/cp-test.txt ha-208854-m03:/home/docker/cp-test_ha-208854-m02_ha-208854-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m03 "sudo cat /home/docker/cp-test_ha-208854-m02_ha-208854-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m02:/home/docker/cp-test.txt ha-208854-m04:/home/docker/cp-test_ha-208854-m02_ha-208854-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m04 "sudo cat /home/docker/cp-test_ha-208854-m02_ha-208854-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp testdata/cp-test.txt ha-208854-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile524923838/001/cp-test_ha-208854-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m03:/home/docker/cp-test.txt ha-208854:/home/docker/cp-test_ha-208854-m03_ha-208854.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854 "sudo cat /home/docker/cp-test_ha-208854-m03_ha-208854.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m03:/home/docker/cp-test.txt ha-208854-m02:/home/docker/cp-test_ha-208854-m03_ha-208854-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m02 "sudo cat /home/docker/cp-test_ha-208854-m03_ha-208854-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m03:/home/docker/cp-test.txt ha-208854-m04:/home/docker/cp-test_ha-208854-m03_ha-208854-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m04 "sudo cat /home/docker/cp-test_ha-208854-m03_ha-208854-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp testdata/cp-test.txt ha-208854-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile524923838/001/cp-test_ha-208854-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m04:/home/docker/cp-test.txt ha-208854:/home/docker/cp-test_ha-208854-m04_ha-208854.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854 "sudo cat /home/docker/cp-test_ha-208854-m04_ha-208854.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m04:/home/docker/cp-test.txt ha-208854-m02:/home/docker/cp-test_ha-208854-m04_ha-208854-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m02 "sudo cat /home/docker/cp-test_ha-208854-m04_ha-208854-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 cp ha-208854-m04:/home/docker/cp-test.txt ha-208854-m03:/home/docker/cp-test_ha-208854-m04_ha-208854-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 ssh -n ha-208854-m03 "sudo cat /home/docker/cp-test_ha-208854-m04_ha-208854-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.31s)
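The long cp/ssh sequence above is a full pairwise matrix: seed each node with testdata/cp-test.txt, copy the file from that node to every other node, and cat each copy over ssh. A minimal sketch of the same matrix as a loop (node names taken from the log):

# Pairwise file-copy verification across all four ha-208854 nodes.
NODES='ha-208854 ha-208854-m02 ha-208854-m03 ha-208854-m04'
for src in $NODES; do
  out/minikube-linux-amd64 -p ha-208854 cp testdata/cp-test.txt "$src:/home/docker/cp-test.txt"
  for dst in $NODES; do
    [ "$src" = "$dst" ] && continue
    out/minikube-linux-amd64 -p ha-208854 cp "$src:/home/docker/cp-test.txt" \
      "$dst:/home/docker/cp-test_${src}_${dst}.txt"
    out/minikube-linux-amd64 -p ha-208854 ssh -n "$dst" "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
  done
done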

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-208854 node stop m02 -v=7 --alsologtostderr: (10.698307422s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr: exit status 7 (664.852379ms)

                                                
                                                
-- stdout --
	ha-208854
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-208854-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-208854-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-208854-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 13:12:20.894052  863726 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:12:20.894321  863726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:12:20.894331  863726 out.go:358] Setting ErrFile to fd 2...
	I0407 13:12:20.894335  863726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:12:20.894538  863726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 13:12:20.894745  863726 out.go:352] Setting JSON to false
	I0407 13:12:20.894783  863726 mustload.go:65] Loading cluster: ha-208854
	I0407 13:12:20.894880  863726 notify.go:220] Checking for updates...
	I0407 13:12:20.895273  863726 config.go:182] Loaded profile config "ha-208854": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:12:20.895300  863726 status.go:174] checking status of ha-208854 ...
	I0407 13:12:20.895784  863726 cli_runner.go:164] Run: docker container inspect ha-208854 --format={{.State.Status}}
	I0407 13:12:20.914078  863726 status.go:371] ha-208854 host status = "Running" (err=<nil>)
	I0407 13:12:20.914129  863726 host.go:66] Checking if "ha-208854" exists ...
	I0407 13:12:20.914390  863726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-208854
	I0407 13:12:20.932163  863726 host.go:66] Checking if "ha-208854" exists ...
	I0407 13:12:20.932398  863726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:12:20.932457  863726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-208854
	I0407 13:12:20.956494  863726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/ha-208854/id_rsa Username:docker}
	I0407 13:12:21.044764  863726 ssh_runner.go:195] Run: systemctl --version
	I0407 13:12:21.049091  863726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:12:21.060794  863726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:12:21.109597  863726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-04-07 13:12:21.099981903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 13:12:21.110174  863726 kubeconfig.go:125] found "ha-208854" server: "https://192.168.49.254:8443"
	I0407 13:12:21.110209  863726 api_server.go:166] Checking apiserver status ...
	I0407 13:12:21.110253  863726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:12:21.122267  863726 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2597/cgroup
	I0407 13:12:21.131408  863726 api_server.go:182] apiserver freezer: "2:freezer:/docker/3c65f43fb4b24cfd6e2cfb2dc74f030cb0f64f80df35649868f37520988771fb/kubepods/burstable/pod4b3b44f626664ebddfd05a63d7334650/3236d68802101782f1ee86a9bda35f270858c5b294287359ea048a255aae4934"
	I0407 13:12:21.131614  863726 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3c65f43fb4b24cfd6e2cfb2dc74f030cb0f64f80df35649868f37520988771fb/kubepods/burstable/pod4b3b44f626664ebddfd05a63d7334650/3236d68802101782f1ee86a9bda35f270858c5b294287359ea048a255aae4934/freezer.state
	I0407 13:12:21.139903  863726 api_server.go:204] freezer state: "THAWED"
	I0407 13:12:21.139937  863726 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 13:12:21.143686  863726 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 13:12:21.143711  863726 status.go:463] ha-208854 apiserver status = Running (err=<nil>)
	I0407 13:12:21.143722  863726 status.go:176] ha-208854 status: &{Name:ha-208854 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:12:21.143737  863726 status.go:174] checking status of ha-208854-m02 ...
	I0407 13:12:21.143964  863726 cli_runner.go:164] Run: docker container inspect ha-208854-m02 --format={{.State.Status}}
	I0407 13:12:21.164908  863726 status.go:371] ha-208854-m02 host status = "Stopped" (err=<nil>)
	I0407 13:12:21.164935  863726 status.go:384] host is not running, skipping remaining checks
	I0407 13:12:21.164948  863726 status.go:176] ha-208854-m02 status: &{Name:ha-208854-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:12:21.164967  863726 status.go:174] checking status of ha-208854-m03 ...
	I0407 13:12:21.165236  863726 cli_runner.go:164] Run: docker container inspect ha-208854-m03 --format={{.State.Status}}
	I0407 13:12:21.182878  863726 status.go:371] ha-208854-m03 host status = "Running" (err=<nil>)
	I0407 13:12:21.182917  863726 host.go:66] Checking if "ha-208854-m03" exists ...
	I0407 13:12:21.183213  863726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-208854-m03
	I0407 13:12:21.201692  863726 host.go:66] Checking if "ha-208854-m03" exists ...
	I0407 13:12:21.201964  863726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:12:21.202008  863726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-208854-m03
	I0407 13:12:21.222660  863726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/ha-208854-m03/id_rsa Username:docker}
	I0407 13:12:21.308639  863726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:12:21.319971  863726 kubeconfig.go:125] found "ha-208854" server: "https://192.168.49.254:8443"
	I0407 13:12:21.320001  863726 api_server.go:166] Checking apiserver status ...
	I0407 13:12:21.320037  863726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:12:21.330558  863726 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2416/cgroup
	I0407 13:12:21.339488  863726 api_server.go:182] apiserver freezer: "2:freezer:/docker/f0c4af34db9aff818f055fd4df6c9d9bd61485f84a6136a7f8aed67609e3322f/kubepods/burstable/podce5a328df0fc10498aac75ac4d8187e8/f04a07241ce8a1cce0a35acd97bca4c1c6765861dd5098b6656520403be88e76"
	I0407 13:12:21.339565  863726 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f0c4af34db9aff818f055fd4df6c9d9bd61485f84a6136a7f8aed67609e3322f/kubepods/burstable/podce5a328df0fc10498aac75ac4d8187e8/f04a07241ce8a1cce0a35acd97bca4c1c6765861dd5098b6656520403be88e76/freezer.state
	I0407 13:12:21.347882  863726 api_server.go:204] freezer state: "THAWED"
	I0407 13:12:21.347919  863726 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0407 13:12:21.351828  863726 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0407 13:12:21.351852  863726 status.go:463] ha-208854-m03 apiserver status = Running (err=<nil>)
	I0407 13:12:21.351862  863726 status.go:176] ha-208854-m03 status: &{Name:ha-208854-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:12:21.351878  863726 status.go:174] checking status of ha-208854-m04 ...
	I0407 13:12:21.352138  863726 cli_runner.go:164] Run: docker container inspect ha-208854-m04 --format={{.State.Status}}
	I0407 13:12:21.371804  863726 status.go:371] ha-208854-m04 host status = "Running" (err=<nil>)
	I0407 13:12:21.371831  863726 host.go:66] Checking if "ha-208854-m04" exists ...
	I0407 13:12:21.372113  863726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-208854-m04
	I0407 13:12:21.391196  863726 host.go:66] Checking if "ha-208854-m04" exists ...
	I0407 13:12:21.391502  863726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:12:21.391549  863726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-208854-m04
	I0407 13:12:21.409521  863726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/ha-208854-m04/id_rsa Username:docker}
	I0407 13:12:21.496769  863726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:12:21.508009  863726 status.go:176] ha-208854-m04 status: &{Name:ha-208854-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
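
The status sequence above shows how minikube decides an apiserver is Running: it locates the kube-apiserver process on the node, confirms its freezer cgroup state is THAWED, then probes /healthz on the HA load-balancer endpoint. A minimal manual reproduction, assuming the ha-208854 profile and endpoint from this run (the PID 2597 is specific to this run; substitute the one pgrep returns):

  # locate the apiserver process on the node (status.go does the same)
  minikube -p ha-208854 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  # find its freezer cgroup; the matching freezer.state should read THAWED
  minikube -p ha-208854 ssh -- sudo egrep '^[0-9]+:freezer:' /proc/2597/cgroup
  # probe the HA load balancer; a healthy apiserver answers "ok"
  curl -k https://192.168.49.254:8443/healthz
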
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.36s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (40.69s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-208854 node start m02 -v=7 --alsologtostderr: (39.711869437s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.69s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (233.21s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-208854 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-208854 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-208854 -v=7 --alsologtostderr: (33.804091808s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-208854 --wait=true -v=7 --alsologtostderr
E0407 13:14:30.666687  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:30.673092  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:30.684574  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:30.705964  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:30.747376  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:30.828837  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:30.990410  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:31.312225  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:31.954280  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:33.236006  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:35.797930  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:40.920398  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:14:51.162329  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:11.644300  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:15:52.606198  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:16:13.250388  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-208854 --wait=true -v=7 --alsologtostderr: (3m19.296268708s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-208854
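
RestartClusterKeepsNodes is, at heart, a stop/start round trip with a node-list comparison on either side. The same cycle by hand, assuming the ha-208854 profile from this run:

  minikube node list -p ha-208854        # record the node set
  minikube stop -p ha-208854
  minikube start -p ha-208854 --wait=true
  minikube node list -p ha-208854        # should match the first listing
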
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (233.21s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.38s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-208854 node delete m03 -v=7 --alsologtostderr: (8.603775344s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.38s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (32.54s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 stop -v=7 --alsologtostderr
E0407 13:17:14.528540  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:17:36.316147  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-208854 stop -v=7 --alsologtostderr: (32.419365873s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr: exit status 7 (124.861941ms)

-- stdout --
	ha-208854
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-208854-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-208854-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0407 13:17:39.516553  896240 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:17:39.516794  896240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:17:39.516802  896240 out.go:358] Setting ErrFile to fd 2...
	I0407 13:17:39.516806  896240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:17:39.516997  896240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 13:17:39.517183  896240 out.go:352] Setting JSON to false
	I0407 13:17:39.517213  896240 mustload.go:65] Loading cluster: ha-208854
	I0407 13:17:39.517309  896240 notify.go:220] Checking for updates...
	I0407 13:17:39.517614  896240 config.go:182] Loaded profile config "ha-208854": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:17:39.517635  896240 status.go:174] checking status of ha-208854 ...
	I0407 13:17:39.518050  896240 cli_runner.go:164] Run: docker container inspect ha-208854 --format={{.State.Status}}
	I0407 13:17:39.539911  896240 status.go:371] ha-208854 host status = "Stopped" (err=<nil>)
	I0407 13:17:39.539956  896240 status.go:384] host is not running, skipping remaining checks
	I0407 13:17:39.539972  896240 status.go:176] ha-208854 status: &{Name:ha-208854 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:17:39.540021  896240 status.go:174] checking status of ha-208854-m02 ...
	I0407 13:17:39.540307  896240 cli_runner.go:164] Run: docker container inspect ha-208854-m02 --format={{.State.Status}}
	I0407 13:17:39.562799  896240 status.go:371] ha-208854-m02 host status = "Stopped" (err=<nil>)
	I0407 13:17:39.562827  896240 status.go:384] host is not running, skipping remaining checks
	I0407 13:17:39.562835  896240 status.go:176] ha-208854-m02 status: &{Name:ha-208854-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:17:39.562862  896240 status.go:174] checking status of ha-208854-m04 ...
	I0407 13:17:39.563224  896240 cli_runner.go:164] Run: docker container inspect ha-208854-m04 --format={{.State.Status}}
	I0407 13:17:39.585365  896240 status.go:371] ha-208854-m04 host status = "Stopped" (err=<nil>)
	I0407 13:17:39.585397  896240 status.go:384] host is not running, skipping remaining checks
	I0407 13:17:39.585406  896240 status.go:176] ha-208854-m04 status: &{Name:ha-208854-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
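
Note the non-zero exit above: with every node stopped, `minikube status` returns 7 rather than 0, so automation can branch on the exit status instead of scraping the text. A small sketch, assuming the same profile name:

  if minikube -p ha-208854 status >/dev/null 2>&1; then
    echo "cluster is up"
  else
    echo "cluster is not running (status exited $?)"
  fi
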
--- PASS: TestMultiControlPlane/serial/StopCluster (32.54s)

TestMultiControlPlane/serial/RestartCluster (81.61s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-208854 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-208854 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.797009183s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.61s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

TestMultiControlPlane/serial/AddSecondaryNode (35.38s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-208854 --control-plane -v=7 --alsologtostderr
E0407 13:19:30.667008  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-208854 --control-plane -v=7 --alsologtostderr: (34.533373353s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-208854 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.38s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

TestImageBuild/serial/Setup (22.89s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-064765 --driver=docker  --container-runtime=docker
E0407 13:19:58.372646  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-064765 --driver=docker  --container-runtime=docker: (22.891875563s)
--- PASS: TestImageBuild/serial/Setup (22.89s)

TestImageBuild/serial/NormalBuild (0.9s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-064765
--- PASS: TestImageBuild/serial/NormalBuild (0.90s)

TestImageBuild/serial/BuildWithBuildArg (0.67s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-064765
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.67s)

TestImageBuild/serial/BuildWithDockerIgnore (0.54s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-064765
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.54s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.46s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-064765
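
Taken together, the four subtests above exercise the main `minikube image build` variants; collected in one place (image-064765 is this run's profile, and the ./testdata paths are the test suite's own fixtures):

  minikube image build -t aaa:latest ./testdata/image-build/test-normal -p image-064765
  minikube image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-064765
  minikube image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-064765
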
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.46s)

TestJSONOutput/start/Command (60.49s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-467107 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0407 13:21:13.251323  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-467107 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m0.486409401s)
--- PASS: TestJSONOutput/start/Command (60.49s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.54s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-467107 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-467107 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-467107 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-467107 --output=json --user=testUser: (10.944595234s)
--- PASS: TestJSONOutput/stop/Command (10.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-751724 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-751724 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.422436ms)

-- stdout --
	{"specversion":"1.0","id":"c688de40-e5d9-46ab-a63c-25eb027b0443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-751724] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"90502a8f-a6f1-4196-97c0-dd133f993552","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"8db598db-8234-461c-8faf-afd54845b9f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"719f49a8-2dd9-4b35-a362-47720d6e63a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig"}}
	{"specversion":"1.0","id":"715db075-1240-47fd-bf86-44d58237cc8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube"}}
	{"specversion":"1.0","id":"547f617e-4df3-4b66-836b-5f40d1aa7465","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e78978e5-6777-4d3e-9384-c112441b3ffa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e38ead91-164a-4024-aacd-0728d1d82bdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-751724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-751724
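
Each line of the --output=json stream above is a CloudEvents envelope with the payload under .data, so it pipes cleanly into jq. A sketch for pulling out error events (jq is assumed to be available; output reflects the DRV_UNSUPPORTED_OS event from this run):

  minikube start -p json-output-error-751724 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
  # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64
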
--- PASS: TestErrorJSONOutput (0.21s)

TestKicCustomNetwork/create_custom_network (23.9s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-193368 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-193368 --network=: (21.85418475s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-193368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-193368
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-193368: (2.028626487s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.90s)

TestKicCustomNetwork/use_default_bridge_network (23.97s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-778410 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-778410 --network=bridge: (22.049929618s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-778410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-778410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-778410: (1.897980591s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.97s)

TestKicExistingNetwork (25.94s)

=== RUN   TestKicExistingNetwork
I0407 13:22:15.612675  773373 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0407 13:22:15.629562  773373 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0407 13:22:15.629643  773373 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0407 13:22:15.629660  773373 cli_runner.go:164] Run: docker network inspect existing-network
W0407 13:22:15.646168  773373 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0407 13:22:15.646205  773373 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0407 13:22:15.646224  773373 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0407 13:22:15.646348  773373 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 13:22:15.663821  773373 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7dbb04e7c0fa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:b2:38:16:86:80} reservation:<nil>}
I0407 13:22:15.664303  773373 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017c30d0}
I0407 13:22:15.664331  773373 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0407 13:22:15.664384  773373 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0407 13:22:15.713841  773373 network_create.go:108] docker network existing-network 192.168.58.0/24 created
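
TestKicExistingNetwork pre-creates a bridge network carrying the labels minikube expects, then points --network= at it instead of letting minikube allocate one. The sequence from the log, runnable by hand:

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  minikube start -p existing-network-868244 --network=existing-network
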
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-868244 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-868244 --network=existing-network: (23.840376434s)
helpers_test.go:175: Cleaning up "existing-network-868244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-868244
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-868244: (1.965922112s)
I0407 13:22:41.537066  773373 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.94s)

TestKicCustomSubnet (23.38s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-302734 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-302734 --subnet=192.168.60.0/24: (21.271679857s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-302734 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-302734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-302734
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-302734: (2.08686421s)
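
TestKicCustomSubnet validates --subnet by reading the subnet back off the Docker network, which is a handy check outside the suite too:

  minikube start -p custom-subnet-302734 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-302734 --format "{{(index .IPAM.Config 0).Subnet}}"
  # expected output: 192.168.60.0/24
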
--- PASS: TestKicCustomSubnet (23.38s)

TestKicStaticIP (24.73s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-267572 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-267572 --static-ip=192.168.200.200: (22.501909625s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-267572 ip
helpers_test.go:175: Cleaning up "static-ip-267572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-267572
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-267572: (2.10127195s)
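
--static-ip works the same way: pin the node address at start, then confirm it with `minikube ip`:

  minikube start -p static-ip-267572 --static-ip=192.168.200.200
  minikube -p static-ip-267572 ip    # should print 192.168.200.200
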
--- PASS: TestKicStaticIP (24.73s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (53.32s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-839951 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-839951 --driver=docker  --container-runtime=docker: (24.618616915s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-859457 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-859457 --driver=docker  --container-runtime=docker: (23.38083226s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-839951
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-859457
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-859457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-859457
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-859457: (2.101102709s)
helpers_test.go:175: Cleaning up "first-839951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-839951
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-839951: (2.068067714s)
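
TestMinikubeProfile verifies that the active profile can be flipped between two live clusters; `profile list -ojson` is what the test parses to confirm each switch took effect:

  minikube start -p first-839951 --driver=docker --container-runtime=docker
  minikube start -p second-859457 --driver=docker --container-runtime=docker
  minikube profile first-839951      # make first-839951 the active profile
  minikube profile list -ojson       # inspect which profile is now active
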
--- PASS: TestMinikubeProfile (53.32s)

TestMountStart/serial/StartWithMountFirst (9.35s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-147476 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0407 13:24:30.671645  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-147476 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.349401959s)
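
The mount-start tests drive the host mount entirely through start flags; --no-kubernetes keeps the node lightweight, and the host directory then appears inside the node at /minikube-host. From the log:

  minikube start -p mount-start-1-147476 --memory=2048 --mount \
    --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
    --no-kubernetes --driver=docker --container-runtime=docker
  minikube -p mount-start-1-147476 ssh -- ls /minikube-host    # verify the mount
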
--- PASS: TestMountStart/serial/StartWithMountFirst (9.35s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-147476 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (6.69s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-167709 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-167709 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.688603207s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.69s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167709 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.46s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-147476 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-147476 --alsologtostderr -v=5: (1.463354387s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167709 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-167709
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-167709: (1.173546761s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.64s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-167709
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-167709: (6.642132406s)
--- PASS: TestMountStart/serial/RestartStopped (7.64s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-167709 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (72.88s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990483 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-990483 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m12.403815689s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status --alsologtostderr
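
FreshStart2Nodes boots a two-node cluster in a single command; --wait=true blocks until all components report Ready before the status check runs:

  minikube start -p multinode-990483 --wait=true --memory=2200 --nodes=2 \
    --driver=docker --container-runtime=docker
  minikube -p multinode-990483 status
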
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.88s)

TestMultiNode/serial/DeployApp2Nodes (41.42s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-990483 -- rollout status deployment/busybox: (2.204961573s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:26:07.321461  773373 retry.go:31] will retry after 1.4541323s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:26:08.890553  773373 retry.go:31] will retry after 1.624590094s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:26:10.633003  773373 retry.go:31] will retry after 2.573155322s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0407 13:26:13.250532  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:26:13.322308  773373 retry.go:31] will retry after 3.836400659s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:26:17.274279  773373 retry.go:31] will retry after 4.003000238s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:26:21.393367  773373 retry.go:31] will retry after 10.414284595s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0407 13:26:31.923747  773373 retry.go:31] will retry after 13.033279453s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-4fdcz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-chmdg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-4fdcz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-chmdg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-4fdcz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-chmdg -- nslookup kubernetes.default.svc.cluster.local
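
DeployApp2Nodes polls pod IPs until the busybox Deployment has a pod on each node, then checks DNS from every pod. The resolution checks can be rerun directly (pod names such as busybox-58667487b6-4fdcz are specific to this run; plain kubectl works once the multinode-990483 context is in your kubeconfig):

  kubectl --context multinode-990483 get pods -o jsonpath='{.items[*].status.podIP}'
  kubectl --context multinode-990483 exec busybox-58667487b6-4fdcz -- nslookup kubernetes.io
  kubectl --context multinode-990483 exec busybox-58667487b6-4fdcz -- nslookup kubernetes.default.svc.cluster.local
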
--- PASS: TestMultiNode/serial/DeployApp2Nodes (41.42s)

TestMultiNode/serial/PingHostFrom2Pods (0.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-4fdcz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-4fdcz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-chmdg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-990483 -- exec busybox-58667487b6-chmdg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)

TestMultiNode/serial/AddNode (15.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-990483 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-990483 -v 3 --alsologtostderr: (14.75116373s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.38s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-990483 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (9.09s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp testdata/cp-test.txt multinode-990483:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1129134317/001/cp-test_multinode-990483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483:/home/docker/cp-test.txt multinode-990483-m02:/home/docker/cp-test_multinode-990483_multinode-990483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m02 "sudo cat /home/docker/cp-test_multinode-990483_multinode-990483-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483:/home/docker/cp-test.txt multinode-990483-m03:/home/docker/cp-test_multinode-990483_multinode-990483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m03 "sudo cat /home/docker/cp-test_multinode-990483_multinode-990483-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp testdata/cp-test.txt multinode-990483-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1129134317/001/cp-test_multinode-990483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483-m02:/home/docker/cp-test.txt multinode-990483:/home/docker/cp-test_multinode-990483-m02_multinode-990483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483 "sudo cat /home/docker/cp-test_multinode-990483-m02_multinode-990483.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483-m02:/home/docker/cp-test.txt multinode-990483-m03:/home/docker/cp-test_multinode-990483-m02_multinode-990483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m03 "sudo cat /home/docker/cp-test_multinode-990483-m02_multinode-990483-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp testdata/cp-test.txt multinode-990483-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1129134317/001/cp-test_multinode-990483-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483-m03:/home/docker/cp-test.txt multinode-990483:/home/docker/cp-test_multinode-990483-m03_multinode-990483.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483 "sudo cat /home/docker/cp-test_multinode-990483-m03_multinode-990483.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 cp multinode-990483-m03:/home/docker/cp-test.txt multinode-990483-m02:/home/docker/cp-test_multinode-990483-m03_multinode-990483-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 ssh -n multinode-990483-m02 "sudo cat /home/docker/cp-test_multinode-990483-m03_multinode-990483-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.09s)

TestMultiNode/serial/StopNode (2.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-990483 node stop m03: (1.177309357s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-990483 status: exit status 7 (468.763926ms)

-- stdout --
	multinode-990483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-990483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-990483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-990483 status --alsologtostderr: exit status 7 (481.574963ms)

-- stdout --
	multinode-990483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-990483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-990483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 13:27:13.852489  985967 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:27:13.852639  985967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:27:13.852652  985967 out.go:358] Setting ErrFile to fd 2...
	I0407 13:27:13.852658  985967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:27:13.852964  985967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 13:27:13.853174  985967 out.go:352] Setting JSON to false
	I0407 13:27:13.853218  985967 mustload.go:65] Loading cluster: multinode-990483
	I0407 13:27:13.853333  985967 notify.go:220] Checking for updates...
	I0407 13:27:13.853749  985967 config.go:182] Loaded profile config "multinode-990483": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:27:13.853782  985967 status.go:174] checking status of multinode-990483 ...
	I0407 13:27:13.854243  985967 cli_runner.go:164] Run: docker container inspect multinode-990483 --format={{.State.Status}}
	I0407 13:27:13.873527  985967 status.go:371] multinode-990483 host status = "Running" (err=<nil>)
	I0407 13:27:13.873552  985967 host.go:66] Checking if "multinode-990483" exists ...
	I0407 13:27:13.873825  985967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-990483
	I0407 13:27:13.892995  985967 host.go:66] Checking if "multinode-990483" exists ...
	I0407 13:27:13.893357  985967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:27:13.893415  985967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-990483
	I0407 13:27:13.911476  985967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/multinode-990483/id_rsa Username:docker}
	I0407 13:27:14.000973  985967 ssh_runner.go:195] Run: systemctl --version
	I0407 13:27:14.005142  985967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:27:14.016045  985967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0407 13:27:14.067452  985967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-04-07 13:27:14.057518368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1078-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0407 13:27:14.068220  985967 kubeconfig.go:125] found "multinode-990483" server: "https://192.168.67.2:8443"
	I0407 13:27:14.068266  985967 api_server.go:166] Checking apiserver status ...
	I0407 13:27:14.068313  985967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:27:14.079511  985967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2464/cgroup
	I0407 13:27:14.089173  985967 api_server.go:182] apiserver freezer: "2:freezer:/docker/fd2ba2e544b85d0ee2180b14e0f2c44855595e7e560b0810666138db3a4deed7/kubepods/burstable/pod92fc6b611024dcedf66b33745de63a79/a9abdb30b1fd89bae16385ec3fd60165551f8bd86ec292dd3e926e5e7c67eded"
	I0407 13:27:14.089271  985967 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fd2ba2e544b85d0ee2180b14e0f2c44855595e7e560b0810666138db3a4deed7/kubepods/burstable/pod92fc6b611024dcedf66b33745de63a79/a9abdb30b1fd89bae16385ec3fd60165551f8bd86ec292dd3e926e5e7c67eded/freezer.state
	I0407 13:27:14.097615  985967 api_server.go:204] freezer state: "THAWED"
	I0407 13:27:14.097643  985967 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0407 13:27:14.101521  985967 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0407 13:27:14.101546  985967 status.go:463] multinode-990483 apiserver status = Running (err=<nil>)
	I0407 13:27:14.101557  985967 status.go:176] multinode-990483 status: &{Name:multinode-990483 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:27:14.101573  985967 status.go:174] checking status of multinode-990483-m02 ...
	I0407 13:27:14.101816  985967 cli_runner.go:164] Run: docker container inspect multinode-990483-m02 --format={{.State.Status}}
	I0407 13:27:14.119190  985967 status.go:371] multinode-990483-m02 host status = "Running" (err=<nil>)
	I0407 13:27:14.119222  985967 host.go:66] Checking if "multinode-990483-m02" exists ...
	I0407 13:27:14.119533  985967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-990483-m02
	I0407 13:27:14.137251  985967 host.go:66] Checking if "multinode-990483-m02" exists ...
	I0407 13:27:14.137531  985967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:27:14.137581  985967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-990483-m02
	I0407 13:27:14.155606  985967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20598-766623/.minikube/machines/multinode-990483-m02/id_rsa Username:docker}
	I0407 13:27:14.248656  985967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:27:14.259256  985967 status.go:176] multinode-990483-m02 status: &{Name:multinode-990483-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:27:14.259301  985967 status.go:174] checking status of multinode-990483-m03 ...
	I0407 13:27:14.259632  985967 cli_runner.go:164] Run: docker container inspect multinode-990483-m03 --format={{.State.Status}}
	I0407 13:27:14.276975  985967 status.go:371] multinode-990483-m03 host status = "Stopped" (err=<nil>)
	I0407 13:27:14.276999  985967 status.go:384] host is not running, skipping remaining checks
	I0407 13:27:14.277013  985967 status.go:176] multinode-990483-m03 status: &{Name:multinode-990483-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
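
Note that minikube status exits with code 7 here because one node's host is stopped: the command itself succeeded and its stdout is still the full status table, so callers have to treat the exit code as data rather than failure. A hedged Go sketch of that handling (the profile name is taken from the log; error handling is simplified):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-990483", "status")
	out, err := cmd.Output() // on a non-zero exit, out still holds the status table
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		fmt.Printf("degraded but reachable cluster:\n%s", out)
		return
	}
	if err != nil {
		panic(err) // the command itself failed to run
	}
	fmt.Printf("all nodes running:\n%s", out)
}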

TestMultiNode/serial/StartAfterStop (10.03s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-990483 node start m03 -v=7 --alsologtostderr: (9.349841051s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.03s)

TestMultiNode/serial/RestartKeepsNodes (77.55s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-990483
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-990483
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-990483: (22.439276843s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990483 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-990483 --wait=true -v=8 --alsologtostderr: (55.007046488s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-990483
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.55s)

TestMultiNode/serial/DeleteNode (5.02s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-990483 node delete m03: (4.454086257s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.02s)
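
The go-template in the final kubectl call ranges over every node's conditions and prints only the Ready condition's status. A self-contained illustration of how that template evaluates, run over a mocked node list standing in for the real `kubectl get nodes` JSON:

package main

import (
	"os"
	"text/template"
)

func main() {
	// the same template the test passes to kubectl, evaluated over maps
	// the way kubectl evaluates it over the decoded JSON object
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	data := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	t.Execute(os.Stdout, data) // prints " True"
}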

TestMultiNode/serial/StopMultiNode (21.4s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-990483 stop: (21.221440016s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-990483 status: exit status 7 (86.510984ms)

-- stdout --
	multinode-990483
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-990483-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-990483 status --alsologtostderr: exit status 7 (87.145935ms)

-- stdout --
	multinode-990483
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-990483-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0407 13:29:08.226967 1001319 out.go:345] Setting OutFile to fd 1 ...
	I0407 13:29:08.227214 1001319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:29:08.227222 1001319 out.go:358] Setting ErrFile to fd 2...
	I0407 13:29:08.227226 1001319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:29:08.227413 1001319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-766623/.minikube/bin
	I0407 13:29:08.227590 1001319 out.go:352] Setting JSON to false
	I0407 13:29:08.227628 1001319 mustload.go:65] Loading cluster: multinode-990483
	I0407 13:29:08.227770 1001319 notify.go:220] Checking for updates...
	I0407 13:29:08.228328 1001319 config.go:182] Loaded profile config "multinode-990483": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:29:08.228367 1001319 status.go:174] checking status of multinode-990483 ...
	I0407 13:29:08.229622 1001319 cli_runner.go:164] Run: docker container inspect multinode-990483 --format={{.State.Status}}
	I0407 13:29:08.247854 1001319 status.go:371] multinode-990483 host status = "Stopped" (err=<nil>)
	I0407 13:29:08.247895 1001319 status.go:384] host is not running, skipping remaining checks
	I0407 13:29:08.247903 1001319 status.go:176] multinode-990483 status: &{Name:multinode-990483 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:29:08.247945 1001319 status.go:174] checking status of multinode-990483-m02 ...
	I0407 13:29:08.248259 1001319 cli_runner.go:164] Run: docker container inspect multinode-990483-m02 --format={{.State.Status}}
	I0407 13:29:08.266232 1001319 status.go:371] multinode-990483-m02 host status = "Stopped" (err=<nil>)
	I0407 13:29:08.266257 1001319 status.go:384] host is not running, skipping remaining checks
	I0407 13:29:08.266267 1001319 status.go:176] multinode-990483-m02 status: &{Name:multinode-990483-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.40s)

TestMultiNode/serial/RestartMultiNode (53.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990483 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0407 13:29:30.666114  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-990483 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (53.307362097s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-990483 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.90s)

TestMultiNode/serial/ValidateNameConflict (23.76s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-990483
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990483-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-990483-m02 --driver=docker  --container-runtime=docker: exit status 14 (69.052201ms)

-- stdout --
	* [multinode-990483-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-990483-m02' is duplicated with machine name 'multinode-990483-m02' in profile 'multinode-990483'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-990483-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-990483-m03 --driver=docker  --container-runtime=docker: (21.224131376s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-990483
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-990483: exit status 80 (282.668407ms)

-- stdout --
	* Adding node m03 to cluster multinode-990483 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-990483-m03 already exists in multinode-990483-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-990483-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-990483-m03: (2.12924834s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.76s)

TestPreload (93.07s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-149475 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0407 13:30:53.734456  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:31:13.250997  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-149475 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (55.657949923s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-149475 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-149475 image pull gcr.io/k8s-minikube/busybox: (1.402937397s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-149475
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-149475: (10.769940985s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-149475 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-149475 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (22.878635744s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-149475 image list
helpers_test.go:175: Cleaning up "test-preload-149475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-149475
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-149475: (2.157197844s)
--- PASS: TestPreload (93.07s)
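
The point of TestPreload is that an image pulled before the stop/start cycle must still be present once the cluster restarts with the default preload behavior. A hedged sketch of the final verification step; the profile and image names come from the log, and the substring check is illustrative rather than the test's actual assertion:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-149475", "image", "list").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("pulled image survived the restart")
	} else {
		fmt.Println("image missing after restart")
	}
}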

TestScheduledStopUnix (94.8s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-519726 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-519726 --memory=2048 --driver=docker  --container-runtime=docker: (21.788696886s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-519726 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-519726 -n scheduled-stop-519726
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-519726 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0407 13:32:25.126860  773373 retry.go:31] will retry after 86.742µs: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.128023  773373 retry.go:31] will retry after 183.363µs: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.129218  773373 retry.go:31] will retry after 132.561µs: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.130373  773373 retry.go:31] will retry after 504.535µs: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.131522  773373 retry.go:31] will retry after 738.482µs: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.132674  773373 retry.go:31] will retry after 1.095326ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.133846  773373 retry.go:31] will retry after 1.33484ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.136080  773373 retry.go:31] will retry after 1.231637ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.138290  773373 retry.go:31] will retry after 3.393301ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.142539  773373 retry.go:31] will retry after 3.477165ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.146739  773373 retry.go:31] will retry after 3.685347ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.150963  773373 retry.go:31] will retry after 9.404624ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.161216  773373 retry.go:31] will retry after 17.798206ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.179509  773373 retry.go:31] will retry after 27.041232ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
I0407 13:32:25.206739  773373 retry.go:31] will retry after 34.967275ms: open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/scheduled-stop-519726/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-519726 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-519726 -n scheduled-stop-519726
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-519726
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-519726 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-519726
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-519726: exit status 7 (73.826152ms)

-- stdout --
	scheduled-stop-519726
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-519726 -n scheduled-stop-519726
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-519726 -n scheduled-stop-519726: exit status 7 (68.362337ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-519726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-519726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-519726: (1.653238145s)
--- PASS: TestScheduledStopUnix (94.80s)

TestSkaffold (97.85s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4146269795 version
skaffold_test.go:63: skaffold version: v2.15.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-296125 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-296125 --memory=2600 --driver=docker  --container-runtime=docker: (21.908661983s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4146269795 run --minikube-profile skaffold-296125 --kube-context skaffold-296125 --status-check=true --port-forward=false --interactive=false
E0407 13:34:16.319609  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:34:30.671612  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4146269795 run --minikube-profile skaffold-296125 --kube-context skaffold-296125 --status-check=true --port-forward=false --interactive=false: (1m1.461843859s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-744cf65554-865sv" [06d1d7da-b2f9-46a3-8db3-5279c5a3fb1d] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003318022s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-c66c8f4db-tk46q" [27295b2b-2489-446e-bdf0-27060c29bb8e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003367566s
helpers_test.go:175: Cleaning up "skaffold-296125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-296125
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-296125: (2.780720039s)
--- PASS: TestSkaffold (97.85s)

TestInsufficientStorage (12.8s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-436979 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-436979 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.597550308s)

-- stdout --
	{"specversion":"1.0","id":"1a4341eb-fc4f-4573-b017-1e8ca8dd7004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-436979] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4a81a3a-28f5-4bf8-b3a6-677d343cc759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"51430760-7166-42fd-b314-1184efd14808","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"88c4a1e0-81ec-432a-ab75-2bf2a71f4add","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig"}}
	{"specversion":"1.0","id":"3dc9588f-1afb-4a67-8871-13986f68379e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube"}}
	{"specversion":"1.0","id":"db722c31-fefe-4572-9f22-b038a637d0a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5288e753-74f8-4ff3-8c6c-17f5f223e953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e53cdf91-9276-4cba-b8c2-68c2bb0d818d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7e1913d4-cd82-454c-aecb-73ddedefcbee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7c5e47f6-0309-47c9-8700-e677ca05009e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1fb8da7-f735-4d66-85c7-2961e9320e45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a10cf0bc-ccb6-43a2-80ae-dca2e8687407","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-436979\" primary control-plane node in \"insufficient-storage-436979\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ed86dc7-5acd-4288-91cd-15d55d1542a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1743675393-20591 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b51a1d0e-e869-4255-9b3f-d83efcb15d46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1476e2bc-12a8-4798-9bfa-0914fc34f925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-436979 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-436979 --output=json --layout=cluster: exit status 7 (265.460247ms)

-- stdout --
	{"Name":"insufficient-storage-436979","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-436979","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0407 13:35:26.431405 1042499 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-436979" does not appear in /home/jenkins/minikube-integration/20598-766623/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-436979 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-436979 --output=json --layout=cluster: exit status 7 (262.310435ms)

-- stdout --
	{"Name":"insufficient-storage-436979","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-436979","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0407 13:35:26.696002 1042600 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-436979" does not appear in /home/jenkins/minikube-integration/20598-766623/kubeconfig
	E0407 13:35:26.706025 1042600 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/insufficient-storage-436979/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-436979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-436979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-436979: (1.67179013s)
--- PASS: TestInsufficientStorage (12.80s)
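
With --output=json, minikube start emits one CloudEvents-style JSON object per line, and the test scans that stream for the RSRC_DOCKER_STORAGE error event (exit code 26). A hedged sketch of decoding the stream; the struct models only the fields visible in the log above, not minikube's full event schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent covers just the fields used here; real events carry more.
type cloudEvent struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)                 // pipe the --output=json stream in here
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)         // single events can be very long lines
	for sc.Scan() {
		var ev cloudEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // ignore anything that is not a JSON event
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data.Name == "RSRC_DOCKER_STORAGE" {
			fmt.Printf("out of disk (exit %s): %s\n", ev.Data.ExitCode, ev.Data.Message)
		}
	}
}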

TestRunningBinaryUpgrade (86.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1262979217 start -p running-upgrade-626288 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1262979217 start -p running-upgrade-626288 --memory=2200 --vm-driver=docker  --container-runtime=docker: (35.315673834s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-626288 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-626288 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.286292309s)
helpers_test.go:175: Cleaning up "running-upgrade-626288" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-626288
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-626288: (2.417366674s)
--- PASS: TestRunningBinaryUpgrade (86.35s)

TestKubernetesUpgrade (330.95s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-512543 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-512543 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.023693403s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-512543
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-512543: (2.756390257s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-512543 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-512543 status --format={{.Host}}: exit status 7 (82.866638ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-512543 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-512543 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.111714887s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-512543 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-512543 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-512543 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (70.190485ms)

-- stdout --
	* [kubernetes-upgrade-512543] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-512543
	    minikube start -p kubernetes-upgrade-512543 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5125432 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-512543 --kubernetes-version=v1.32.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-512543 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-512543 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.494418986s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-512543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-512543
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-512543: (2.349863845s)
--- PASS: TestKubernetesUpgrade (330.95s)

TestMissingContainerUpgrade (132.14s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.797799399 start -p missing-upgrade-054987 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.797799399 start -p missing-upgrade-054987 --memory=2200 --driver=docker  --container-runtime=docker: (1m8.391743249s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-054987
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-054987: (10.541168336s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-054987
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-054987 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-054987 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (50.585930994s)
helpers_test.go:175: Cleaning up "missing-upgrade-054987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-054987
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-054987: (2.213132635s)
--- PASS: TestMissingContainerUpgrade (132.14s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821947 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-821947 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (90.624318ms)

-- stdout --
	* [NoKubernetes-821947] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20598-766623/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-766623/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
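Note: exit status 14 (MK_USAGE) is the expected outcome; --kubernetes-version and --no-kubernetes are mutually exclusive flags. A sketch of the workaround the error message describes, reusing the profile name from this run:

    # Clear any globally configured Kubernetes version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-821947 --no-kubernetes --driver=docker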
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (35.79s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821947 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-821947 --driver=docker  --container-runtime=docker: (35.473844196s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-821947 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.79s)

TestNoKubernetes/serial/StartWithStopK8s (16.71s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821947 --no-kubernetes --driver=docker  --container-runtime=docker
E0407 13:36:13.251195  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-821947 --no-kubernetes --driver=docker  --container-runtime=docker: (14.637940107s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-821947 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-821947 status -o json: exit status 2 (297.830647ms)

-- stdout --
	{"Name":"NoKubernetes-821947","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
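Note: with Kubernetes stopped, status reports Host "Running" but Kubelet and APIServer "Stopped", and the command exits non-zero (2 here), which the harness treats as the expected result. A sketch for extracting the component states from that JSON, assuming jq is available:

    # Print host/kubelet/apiserver states; status exits 2 while components are stopped
    out/minikube-linux-amd64 -p NoKubernetes-821947 status -o json \
      | jq -r '.Host, .Kubelet, .APIServer'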
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-821947
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-821947: (1.771387078s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.71s)

TestNoKubernetes/serial/Start (8.5s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821947 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-821947 --no-kubernetes --driver=docker  --container-runtime=docker: (8.49572675s)
--- PASS: TestNoKubernetes/serial/Start (8.50s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-821947 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-821947 "sudo systemctl is-active --quiet service kubelet": exit status 1 (294.844276ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
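Note: the non-zero exit is the assertion here: systemctl is-active returns a non-zero code for an inactive unit (status 3 in the remote shell above), confirming that no kubelet runs in a --no-kubernetes profile. The same check, sketched as a standalone command:

    # Expect failure: kubelet must not be active when Kubernetes is disabled
    out/minikube-linux-amd64 ssh -p NoKubernetes-821947 "sudo systemctl is-active kubelet" \
      || echo "kubelet inactive, as expected"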
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (4.38s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.469176583s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.38s)

TestNoKubernetes/serial/Stop (4.08s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-821947
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-821947: (4.0777535s)
--- PASS: TestNoKubernetes/serial/Stop (4.08s)

TestNoKubernetes/serial/StartNoArgs (6.91s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-821947 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-821947 --driver=docker  --container-runtime=docker: (6.905118839s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.91s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-821947 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-821947 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.228517ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

TestStoppedBinaryUpgrade/Upgrade (71.66s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2851428490 start -p stopped-upgrade-579155 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2851428490 start -p stopped-upgrade-579155 --memory=2200 --vm-driver=docker  --container-runtime=docker: (31.647891748s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2851428490 -p stopped-upgrade-579155 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2851428490 -p stopped-upgrade-579155 stop: (10.821875184s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-579155 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-579155 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.194082172s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (71.66s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-579155
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-579155: (1.390188401s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

TestPause/serial/Start (64.68s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-487840 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-487840 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m4.683224185s)
--- PASS: TestPause/serial/Start (64.68s)

TestPause/serial/SecondStartNoReconfiguration (33.02s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-487840 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-487840 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.003308036s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.02s)

TestNetworkPlugins/group/auto/Start (31.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0407 13:39:30.666925  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (31.124934664s)
--- PASS: TestNetworkPlugins/group/auto/Start (31.13s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-922548 "pgrep -a kubelet"
I0407 13:39:42.423769  773373 config.go:182] Loaded profile config "auto-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-c8k9r" [c61d8d1d-b2ef-4847-b547-f8acf3fb7fd6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-c8k9r" [c61d8d1d-b2ef-4847-b547-f8acf3fb7fd6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003873749s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

TestPause/serial/Pause (0.56s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-487840 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-487840 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-487840 --output=json --layout=cluster: exit status 2 (323.25314ms)

-- stdout --
	{"Name":"pause-487840","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-487840","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
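Note: in --layout=cluster output minikube encodes component state as HTTP-style status codes; this run shows 418/"Paused" for the apiserver and 405/"Stopped" for the kubelet, and the command exits 2 because the cluster is not fully running. A sketch for listing per-component states, assuming jq is available:

    # Print "component=StatusName" pairs for each node of the paused profile
    out/minikube-linux-amd64 status -p pause-487840 --output=json --layout=cluster \
      | jq -r '.Nodes[].Components | to_entries[] | "\(.key)=\(.value.StatusName)"'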
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.53s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-487840 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.53s)

TestPause/serial/PauseAgain (0.6s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-487840 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.60s)

TestPause/serial/DeletePaused (2.13s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-487840 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-487840 --alsologtostderr -v=5: (2.134846999s)
--- PASS: TestPause/serial/DeletePaused (2.13s)

TestPause/serial/VerifyDeletedResources (0.74s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-487840
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-487840: exit status 1 (18.584061ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-487840: no such volume

** /stderr **
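Note: the failed docker volume inspect is the point of this check: after minikube delete, the profile's volume must be gone, so the "no such volume" error with an empty [] result is the success condition. The same assertion, sketched:

    # Expect inspect to fail once the profile volume has been deleted
    if ! docker volume inspect pause-487840 >/dev/null 2>&1; then
        echo "volume removed"
    fi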
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.74s)

TestNetworkPlugins/group/kindnet/Start (56.36s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (56.363690681s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.36s)

TestNetworkPlugins/group/auto/DNS (21.35s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-922548 exec deployment/netcat -- nslookup kubernetes.default
E0407 13:40:01.779260  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:01.785656  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:01.797083  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:01.818598  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:01.860069  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:01.941507  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:02.103102  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:02.424670  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:03.066982  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:40:04.349259  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-922548 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.160326927s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
I0407 13:40:06.804797  773373 retry.go:31] will retry after 1.03670185s: exit status 1
E0407 13:40:06.910959  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Run:  kubectl --context auto-922548 exec deployment/netcat -- nslookup kubernetes.default
E0407 13:40:12.032405  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context auto-922548 exec deployment/netcat -- nslookup kubernetes.default: (5.155816257s)
--- PASS: TestNetworkPlugins/group/auto/DNS (21.35s)
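Note: the first nslookup timed out ("no servers could be reached") while in-cluster DNS was still converging; the harness retried after about a second and the second attempt succeeded, so the test still passes. A sketch of that retry idea, reusing the context and deployment from this run:

    # Poll in-cluster DNS until it answers, or give up after a few attempts
    for i in 1 2 3; do
        kubectl --context auto-922548 exec deployment/netcat -- nslookup kubernetes.default && break
        sleep 5
    done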

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (60.97s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0407 13:40:42.755788  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m0.972724684s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.97s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nlkgx" [7383b0d9-c0a2-4464-9fa6-7c78cfd92699] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003815498s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-922548 "pgrep -a kubelet"
I0407 13:40:51.380356  773373 config.go:182] Loaded profile config "kindnet-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-52qbv" [818c11ff-a142-49b8-8963-38c5a71b7b8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-52qbv" [818c11ff-a142-49b8-8963-38c5a71b7b8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004829583s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-922548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/Start (49.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0407 13:41:23.717213  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (49.116456707s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-mj7mb" [f79490f3-5a37-4d26-bcf8-9d8d82d65a5e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003953975s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-922548 "pgrep -a kubelet"
I0407 13:41:39.739368  773373 config.go:182] Loaded profile config "calico-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bhp98" [b1f7b9a5-bc19-4b16-a4df-a583d778a8f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bhp98" [b1f7b9a5-bc19-4b16-a4df-a583d778a8f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004820471s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.20s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-922548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/false/Start (71.12s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m11.115022439s)
--- PASS: TestNetworkPlugins/group/false/Start (71.12s)

TestNetworkPlugins/group/flannel/Start (46.68s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (46.679569992s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.68s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-922548 "pgrep -a kubelet"
I0407 13:42:11.643421  773373 config.go:182] Loaded profile config "custom-flannel-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-c2jsh" [fe821340-53d5-4278-8a7b-e17bd58f4af9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-c2jsh" [fe821340-53d5-4278-8a7b-e17bd58f4af9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004292171s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

TestNetworkPlugins/group/bridge/Start (40.39s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (40.388811244s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-922548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/Start (62.49s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0407 13:42:45.639413  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m2.492985707s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (62.49s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-922548 "pgrep -a kubelet"
I0407 13:42:54.895752  773373 config.go:182] Loaded profile config "bridge-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z7fpr" [db7403fd-bb3f-42d1-be89-66e5e25cedf2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-z7fpr" [db7403fd-bb3f-42d1-be89-66e5e25cedf2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00442189s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ljgc9" [e53005c0-0261-47db-be07-37379132e4d8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003611765s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-922548 "pgrep -a kubelet"
I0407 13:43:01.904739  773373 config.go:182] Loaded profile config "flannel-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-g4hbp" [d49544a8-0095-4aae-9851-d8214efb218e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-g4hbp" [d49544a8-0095-4aae-9851-d8214efb218e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.008395862s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.24s)

TestNetworkPlugins/group/false/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-922548 "pgrep -a kubelet"
I0407 13:43:04.010788  773373 config.go:182] Loaded profile config "false-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.34s)

TestNetworkPlugins/group/false/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-x4dlm" [b481155a-3808-413e-8822-60f90c602e6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-x4dlm" [b481155a-3808-413e-8822-60f90c602e6d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003375363s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-922548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-922548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-922548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (43.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-922548 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (43.367775307s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.37s)

TestStartStop/group/old-k8s-version/serial/FirstStart (133.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-956749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-956749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m13.311316774s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.31s)

TestStartStop/group/no-preload/serial/FirstStart (83.45s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-241486 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-241486 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m23.444914371s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (83.45s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-922548 "pgrep -a kubelet"
I0407 13:43:46.398036  773373 config.go:182] Loaded profile config "kubenet-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zm4lx" [cb22903f-b953-4777-895c-ef1126ace180] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zm4lx" [cb22903f-b953-4777-895c-ef1126ace180] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004482753s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.20s)

TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-922548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)
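The Localhost and HairPin checks both run netcat inside the pod: -z opens a connection without sending data and -w 5 caps the wait at five seconds. HairPin is the interesting one: the pod dials its own Service name ("nc ... -z netcat 8080"), so traffic leaves the pod, hits the service address, and is NATed straight back to the same pod, which only succeeds when hairpin mode is working. Inside the pod the whole check amounts to a plain TCP dial; a Go equivalent, with the service name and port taken from the log (this only resolves when run in-cluster):

package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    // Equivalent of "nc -w 5 -z netcat 8080" run inside the pod: open and
    // immediately close a TCP connection to the "netcat" service.
    conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
    if err != nil {
        fmt.Println("hairpin dial failed:", err)
        return
    }
    conn.Close()
    fmt.Println("hairpin dial succeeded")
}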

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-922548 "pgrep -a kubelet"
I0407 13:44:07.626087  773373 config.go:182] Loaded profile config "enable-default-cni-922548": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-922548 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wvw5m" [8de17580-326d-4e70-83d3-e3651bd36336] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wvw5m" [8de17580-326d-4e70-83d3-e3651bd36336] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004750466s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (67.96s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-291859 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-291859 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m7.964413958s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.96s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-922548 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-922548 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
E0407 13:49:42.631831  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
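The E0407 cert_rotation lines that appear here, and in long bursts below, look like noise left over from profiles that earlier tests already deleted (auto-922548, kindnet-922548, and so on): client-go's certificate-rotation watcher (the cert_rotation.go named in the message) keeps trying to reload every client certificate it has seen, and once a profile is torn down its client.crt is gone, so each periodic reload logs an unhandled "no such file or directory" error. They do not indicate a problem with the test currently running. The underlying failure is just a file-open error, as this minimal reproduction shows (the path is the one from the log; running it anywhere the file is absent gives the same message):

package main

import (
    "crypto/tls"
    "fmt"
)

func main() {
    // Reloading a key pair whose files were removed fails exactly like the
    // cert_rotation messages above: "open ...: no such file or directory".
    _, err := tls.LoadX509KeyPair(
        "/home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt",
        "/home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.key")
    if err != nil {
        fmt.Println("reload failed:", err)
    }
}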

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-800546 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:44:42.631247  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:42.637606  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:42.649070  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:42.670455  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:42.711922  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:42.793458  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:42.955415  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:43.277114  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:43.919127  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:45.200690  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:47.762939  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:44:52.884862  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-800546 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m8.339252746s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-241486 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8375844-086a-42dc-b929-20b1ac5b2f77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0407 13:45:01.779088  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:45:03.126143  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f8375844-086a-42dc-b929-20b1ac5b2f77] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003434789s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-241486 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.25s)
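DeployApp creates the busybox pod from testdata/busybox.yaml, polls until pods labelled integration-test=busybox reach Running (the Pending / ContainersNotReady / Running progression above), and finishes by exec-ing "ulimit -n" in the pod. A sketch of that label-based wait using kubectl's jsonpath output, with the context name and the 8m budget from this entry (an approximation of the helpers_test.go wait, not its source):

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

func main() {
    deadline := time.Now().Add(8 * time.Minute)
    for time.Now().Before(deadline) {
        // jsonpath extracts just the phase of every pod carrying the label.
        out, err := exec.Command("kubectl", "--context", "no-preload-241486",
            "get", "pods", "-l", "integration-test=busybox",
            "-o", "jsonpath={.items[*].status.phase}").Output()
        phases := strings.Fields(string(out))
        ready := err == nil && len(phases) > 0
        for _, p := range phases {
            if p != "Running" {
                ready = false
            }
        }
        if ready {
            fmt.Println("busybox is running")
            return
        }
        time.Sleep(2 * time.Second)
    }
    fmt.Println("timed out waiting for integration-test=busybox")
}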

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-241486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-241486 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)
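EnableAddonWhileActive turns the metrics-server addon on with its image and registry rewritten to known placeholder values, then describes the resulting deployment; the point is to verify that the --images and --registries overrides land in the Deployment spec, not that metrics-server actually works (fake.domain is deliberately unreachable). A sketch of that round trip, reusing the flags from the log and assuming the rewritten image reference mentions the fake registry:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    enable := exec.Command("out/minikube-linux-amd64", "addons", "enable",
        "metrics-server", "-p", "no-preload-241486",
        "--images=MetricsServer=registry.k8s.io/echoserver:1.4",
        "--registries=MetricsServer=fake.domain")
    if out, err := enable.CombinedOutput(); err != nil {
        fmt.Printf("enable failed: %v\n%s", err, out)
        return
    }
    desc, err := exec.Command("kubectl", "--context", "no-preload-241486",
        "describe", "deploy/metrics-server", "-n", "kube-system").Output()
    if err != nil {
        fmt.Println("describe failed:", err)
        return
    }
    // Assumed check: the override shows up in the deployment's image field.
    fmt.Println("override applied:", strings.Contains(string(desc), "fake.domain"))
}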

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.89s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-241486 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-241486 --alsologtostderr -v=3: (10.892150474s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-241486 -n no-preload-241486
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-241486 -n no-preload-241486: exit status 7 (69.54313ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-241486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
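The "Non-zero exit ... exit status 7" above is expected rather than a failure: minikube status appears to build its exit code from bit flags for the host, the cluster, and the Kubernetes components, so a fully stopped profile reports 7 while stdout still prints the host state, and the harness records it as "may be ok". Distinguishing that case from a real error in Go means inspecting the exit code rather than just err != nil; a sketch with the binary and profile from this entry:

package main

import (
    "errors"
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    cmd := exec.Command("out/minikube-linux-amd64", "status",
        "--format={{.Host}}", "-p", "no-preload-241486", "-n", "no-preload-241486")
    out, err := cmd.Output()
    var ee *exec.ExitError
    if errors.As(err, &ee) {
        // Exit code 7 with "Stopped" on stdout is the stopped-cluster case;
        // any other combination would be a genuine failure.
        fmt.Printf("host=%s exit=%d\n", strings.TrimSpace(string(out)), ee.ExitCode())
        return
    }
    if err != nil {
        fmt.Println("could not run status:", err)
        return
    }
    fmt.Println("host =", strings.TrimSpace(string(out)))
}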

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (263.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-241486 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:45:23.607652  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-241486 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m22.952459147s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-241486 -n no-preload-241486
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-291859 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [093be56f-4d0e-4488-a2f1-4d7f1f276ae7] Pending
helpers_test.go:344: "busybox" [093be56f-4d0e-4488-a2f1-4d7f1f276ae7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [093be56f-4d0e-4488-a2f1-4d7f1f276ae7] Running
E0407 13:45:29.482067  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/skaffold-296125/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00344824s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-291859 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-291859 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-291859 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-291859 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-291859 --alsologtostderr -v=3: (10.768800717s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291859 -n embed-certs-291859
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291859 -n embed-certs-291859: exit status 7 (133.600105ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-291859 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (263.6s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-291859 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-291859 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m23.28983336s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291859 -n embed-certs-291859
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (263.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-956749 create -f testdata/busybox.yaml
E0407 13:45:45.122998  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:45:45.129385  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:45:45.141440  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0407 13:45:45.163768  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [78b44237-67da-4e1c-bb21-091cf71493b8] Pending
E0407 13:45:45.205209  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:45:45.286747  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:45:45.448260  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:45:45.769873  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [78b44237-67da-4e1c-bb21-091cf71493b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [78b44237-67da-4e1c-bb21-091cf71493b8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004040074s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-956749 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-800546 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b4c2656-5802-44ba-a4b6-b0dd538da990] Pending
E0407 13:45:46.411231  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [2b4c2656-5802-44ba-a4b6-b0dd538da990] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0407 13:45:47.693323  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [2b4c2656-5802-44ba-a4b6-b0dd538da990] Running
E0407 13:45:50.254678  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003832881s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-800546 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-800546 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-800546 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-800546 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-800546 --alsologtostderr -v=3: (11.490512731s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-956749 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-956749 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-956749 --alsologtostderr -v=3
E0407 13:45:55.376420  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:04.569475  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:05.617974  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-956749 --alsologtostderr -v=3: (11.015719651s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546: exit status 7 (103.218797ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-800546 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-800546 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-800546 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m26.28259647s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956749 -n old-k8s-version-956749
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956749 -n old-k8s-version-956749: exit status 7 (90.946817ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-956749 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (137.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-956749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0407 13:46:13.250640  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/addons-662808/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:26.099752  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:33.453011  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:33.459397  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:33.470870  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:33.492314  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:33.533766  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:33.615283  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:33.776948  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:34.098701  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:34.740163  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:36.022061  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:38.584372  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:43.705942  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:46:53.948310  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:07.061725  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:11.854811  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:11.861254  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:11.872661  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:11.894129  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:11.935549  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:12.016994  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:12.178552  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:12.500290  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:13.142193  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:14.423823  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:14.430218  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:16.985887  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:22.107911  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:26.490807  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:32.349619  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:33.736108  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:52.831550  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.084485  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.090978  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.102454  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.124140  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.165720  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.247205  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.391746  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.409136  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.612755  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.619128  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.630612  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.652778  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.694250  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.730782  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.776242  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:55.937823  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:56.260049  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:56.372651  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:56.901791  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:57.654178  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:47:58.183644  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:00.215638  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:00.745302  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:04.263950  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:04.270310  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:04.281730  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:04.303217  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:04.344646  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:04.426068  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:04.587672  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:04.909404  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:05.337732  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:05.551209  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:05.867610  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:06.832931  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:09.394394  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:14.516736  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:15.579408  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:16.109123  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-956749 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m17.269186612s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-956749 -n old-k8s-version-956749
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (137.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-tctdq" [853232cd-eaef-4fc7-908d-2b51a5c77a99] Running
E0407 13:48:24.758450  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:28.983886  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003564184s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-tctdq" [853232cd-eaef-4fc7-908d-2b51a5c77a99] Running
E0407 13:48:33.793195  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004083125s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-956749 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-956749 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
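VerifyKubernetesImages lists every image known to the profile's container runtime as JSON and scans for anything outside the expected Kubernetes image set; here it flags the gcr.io/k8s-minikube/busybox image deployed by the earlier DeployApp step. The exact JSON schema of "image list --format=json" varies across minikube versions, so this sketch decodes it without committing to a field layout (an illustration, not the test's parser):

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

func main() {
    out, err := exec.Command("out/minikube-linux-amd64", "-p",
        "old-k8s-version-956749", "image", "list", "--format=json").Output()
    if err != nil {
        fmt.Println("image list failed:", err)
        return
    }
    // Each entry describes one image in the profile; decode generically.
    var images []map[string]any
    if err := json.Unmarshal(out, &images); err != nil {
        fmt.Println("unexpected JSON shape:", err)
        return
    }
    for _, img := range images {
        fmt.Println(img)
    }
}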

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-956749 --alsologtostderr -v=1
E0407 13:48:36.061679  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-956749 -n old-k8s-version-956749
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-956749 -n old-k8s-version-956749: exit status 2 (298.531221ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-956749 -n old-k8s-version-956749
E0407 13:48:36.590965  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-956749 -n old-k8s-version-956749: exit status 2 (295.082461ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-956749 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-956749 -n old-k8s-version-956749
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-956749 -n old-k8s-version-956749
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.55s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.88s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-336976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:48:45.240317  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:46.583948  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:46.590372  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:46.601754  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:46.623035  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:46.665307  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:46.746727  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:46.908412  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:47.230861  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:47.873190  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:49.155133  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:51.716707  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:48:56.838932  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:07.080300  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:07.848352  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:07.854725  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:07.866133  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:07.887594  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:07.929047  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:08.011048  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:08.173307  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:08.494993  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:09.136407  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-336976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (28.88232592s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.88s)
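
The FirstStart invocation above packs the cluster configuration into one command; shown again below, reflowed for readability, with comments on what the flags exercise in this run (no new options introduced):

# --wait restricts startup blocking to the listed components;
# --extra-config hands the pod network CIDR through to kubeadm.
# The cluster comes up without a CNI plugin actually deployed, hence the
# later "cni mode requires additional setup" warnings in this group.
out/minikube-linux-amd64 start -p newest-cni-336976 --memory=2200 \
  --alsologtostderr --wait=apiserver,system_pods,default_sa \
  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=docker --kubernetes-version=v1.32.2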

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-336976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (10.82s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-336976 --alsologtostderr -v=3
E0407 13:49:10.418473  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:12.980042  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:17.023346  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:17.313498  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/calico-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:17.553205  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:18.102398  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-336976 --alsologtostderr -v=3: (10.819068633s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.82s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336976 -n newest-cni-336976
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336976 -n newest-cni-336976: exit status 7 (150.104529ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-336976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)
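
Stop and EnableAddonAfterStop together verify that addon configuration can be changed against a stopped cluster; a sketch of the sequence as run above (exit status 7 from status simply means the host is stopped):

out/minikube-linux-amd64 stop -p newest-cni-336976 --alsologtostderr -v=3
# A stopped profile reports Host=Stopped and status exits with code 7.
out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336976 -n newest-cni-336976
# Enabling an addon while stopped only records the configuration;
# it takes effect on the next start.
out/minikube-linux-amd64 addons enable dashboard -p newest-cni-336976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4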

TestStartStop/group/newest-cni/serial/SecondStart (14.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-336976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0407 13:49:26.202231  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/false-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:27.562429  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:28.344120  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:49:30.666664  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/functional-880043/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-336976 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (13.936963336s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-336976 -n newest-cni-336976
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.26s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-336976 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-336976 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-336976 -n newest-cni-336976
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-336976 -n newest-cni-336976: exit status 2 (296.886119ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-336976 -n newest-cni-336976
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-336976 -n newest-cni-336976: exit status 2 (295.920923ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-336976 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-336976 -n newest-cni-336976
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-336976 -n newest-cni-336976
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-gqbld" [ddc7e423-b105-49c6-89f7-1b980dfcd8a9] Running
E0407 13:49:48.826323  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/enable-default-cni-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004040794s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
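
The UserAppExistsAfterStop steps poll for the dashboard pod by label until it reports Running; an equivalent manual check (illustrative only, not the test's own helper code):

# Look up the dashboard pod the test waits on, by the same label selector.
kubectl --context no-preload-241486 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard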

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-gqbld" [ddc7e423-b105-49c6-89f7-1b980dfcd8a9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003662778s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-241486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
E0407 13:49:55.714434  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/custom-flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-241486 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.47s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-241486 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-241486 -n no-preload-241486
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-241486 -n no-preload-241486: exit status 2 (290.513483ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-241486 -n no-preload-241486
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-241486 -n no-preload-241486: exit status 2 (290.701068ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-241486 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-241486 -n no-preload-241486
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-241486 -n no-preload-241486
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.47s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tlzbc" [8ee2dd32-c3f4-4f3e-89de-a3a5115e5e09] Running
E0407 13:50:08.523834  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kubenet-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:10.332548  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/auto-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004027226s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-tlzbc" [8ee2dd32-c3f4-4f3e-89de-a3a5115e5e09] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003086319s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-291859 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-291859 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-291859 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291859 -n embed-certs-291859
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291859 -n embed-certs-291859: exit status 2 (295.290978ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-291859 -n embed-certs-291859
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-291859 -n embed-certs-291859: exit status 2 (294.612186ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-291859 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291859 -n embed-certs-291859
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-291859 -n embed-certs-291859
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.46s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hlnkw" [3456c44d-c6b8-4de9-bcd5-66d5848b540c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003641776s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-hlnkw" [3456c44d-c6b8-4de9-bcd5-66d5848b540c] Running
E0407 13:50:38.945249  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/bridge-922548/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:39.475200  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/flannel-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003831241s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-800546 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-800546 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-800546 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546: exit status 2 (288.206186ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546
E0407 13:50:45.122968  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/kindnet-922548/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546: exit status 2 (289.614603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-800546 --alsologtostderr -v=1
E0407 13:50:45.163306  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/old-k8s-version-956749/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:45.169683  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/old-k8s-version-956749/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:45.181106  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/old-k8s-version-956749/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:45.202709  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/old-k8s-version-956749/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:45.244731  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/old-k8s-version-956749/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:45.326935  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/old-k8s-version-956749/client.crt: no such file or directory" logger="UnhandledError"
E0407 13:50:45.489353  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/old-k8s-version-956749/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546
E0407 13:50:45.810786  773373 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/old-k8s-version-956749/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-800546 -n default-k8s-diff-port-800546
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.37s)

Test skip (22/345)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.25s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:631: 
----------------------- debugLogs start: cilium-922548 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-922548

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-922548

>>> host: /etc/nsswitch.conf:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /etc/hosts:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /etc/resolv.conf:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-922548

>>> host: crictl pods:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: crictl containers:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> k8s: describe netcat deployment:
error: context "cilium-922548" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-922548" does not exist

>>> k8s: netcat logs:
error: context "cilium-922548" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-922548" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-922548" does not exist

>>> k8s: coredns logs:
error: context "cilium-922548" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-922548" does not exist

>>> k8s: api server logs:
error: context "cilium-922548" does not exist

>>> host: /etc/cni:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: ip a s:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: ip r s:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: iptables-save:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: iptables table nat:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-922548

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-922548

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-922548" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-922548" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-922548

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-922548

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-922548" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-922548" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-922548" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-922548" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-922548" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: kubelet daemon config:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> k8s: kubelet logs:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:36:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-054987
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20598-766623/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:36:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-803744
contexts:
- context:
    cluster: missing-upgrade-054987
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:36:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-054987
  name: missing-upgrade-054987
- context:
    cluster: offline-docker-803744
    extensions:
    - extension:
        last-update: Mon, 07 Apr 2025 13:36:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: offline-docker-803744
  name: offline-docker-803744
current-context: missing-upgrade-054987
kind: Config
preferences: {}
users:
- name: missing-upgrade-054987
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/missing-upgrade-054987/client.crt
    client-key: /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/missing-upgrade-054987/client.key
- name: offline-docker-803744
  user:
    client-certificate: /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/offline-docker-803744/client.crt
    client-key: /home/jenkins/minikube-integration/20598-766623/.minikube/profiles/offline-docker-803744/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-922548

>>> host: docker daemon status:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: docker daemon config:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: docker system info:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: cri-docker daemon status:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: cri-docker daemon config:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: cri-dockerd version:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: containerd daemon status:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: containerd daemon config:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: containerd config dump:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: crio daemon status:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: crio daemon config:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: /etc/crio:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

>>> host: crio config:
* Profile "cilium-922548" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-922548"

----------------------- debugLogs end: cilium-922548 [took: 4.054493802s] --------------------------------
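
The kubectl config dump above is the diagnosis in miniature: only the missing-upgrade-054987 and offline-docker-803744 contexts exist, so every kubectl probe against cilium-922548 fails before reaching a cluster. A minimal check against the same kubeconfig (standard kubectl; the expected output is inferred from the dump and the error lines above):

$ kubectl config get-contexts -o name
missing-upgrade-054987
offline-docker-803744
$ kubectl --context cilium-922548 get pods
error: context "cilium-922548" does not exist
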
helpers_test.go:175: Cleaning up "cilium-922548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-922548
--- SKIP: TestNetworkPlugins/group/cilium (4.25s)
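
The host-level probes all fail one layer down for the same reason: the cilium-922548 profile had already been deleted when the collector ran. Following the log's own hint, a sketch of how to recreate it for a live capture (standard minikube commands; the profile name is taken from this report):

$ minikube profile list            # cilium-922548 should be absent from the table
$ minikube start -p cilium-922548  # recreates the profile and its kube context
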
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-903869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-903869
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
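
The skip is driver-gated: per start_stop_delete_test.go:101, this group only runs on VirtualBox. A hedged sketch of exercising it locally; -run is standard go test, but treat -minikube-start-args as an assumption and verify it against the integration harness before relying on it:

$ go test ./test/integration -run 'TestStartStop/group/disable-driver-mounts' \
    -args -minikube-start-args='--driver=virtualbox'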