Test Report: Docker_Linux_containerd 20390

1f24ff7f1f35c751c6a992fe7f61f220cc357745:2025-02-10:38293

Test fail (11/331)

TestAddons/parallel/LocalPath (188.03s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-444927 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-444927 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [99e0a41e-dea7-4fc3-a083-fa0680179d33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:901: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:901: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-444927 -n addons-444927
addons_test.go:901: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-02-10 12:39:01.99796146 +0000 UTC m=+401.155128614
addons_test.go:901: (dbg) Run:  kubectl --context addons-444927 describe po test-local-path -n default
addons_test.go:901: (dbg) kubectl --context addons-444927 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-444927/192.168.49.2
Start Time:       Mon, 10 Feb 2025 12:36:01 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.32
IPs:
IP:  10.244.0.32
Containers:
busybox:
Container ID:  
Image:         busybox:stable
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvtsj (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  test-pvc
ReadOnly:   false
kube-api-access-qvtsj:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m1s                 default-scheduler  Successfully assigned default/test-local-path to addons-444927
Warning  Failed     2m59s                kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:afa67e3cea50ce204060a6c0113bd63cb289cc0f555d5a80a3bb7c0f62b95add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    84s (x4 over 3m)     kubelet            Pulling image "busybox:stable"
Warning  Failed     83s (x4 over 2m59s)  kubelet            Error: ErrImagePull
Warning  Failed     83s (x3 over 2m44s)  kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    4s (x11 over 2m59s)  kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     4s (x11 over 2m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:901: (dbg) Run:  kubectl --context addons-444927 logs test-local-path -n default
addons_test.go:901: (dbg) Non-zero exit: kubectl --context addons-444927 logs test-local-path -n default: exit status 1 (66.292909ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:901: kubectl --context addons-444927 logs test-local-path -n default: exit status 1
addons_test.go:902: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
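
Note: the events above point to the root cause: anonymous pulls of busybox:stable from Docker Hub were rejected with 429 Too Many Requests, so the container never started and the 3m0s wait expired. As an aside (not part of the test output), the remaining anonymous pull quota on the runner can be checked against Docker Hub's documented rate-limit headers; this is a minimal sketch assuming curl and jq are available, and the ratelimitpreview/test repository name comes from Docker's documentation rather than from this test run:

  # Request an anonymous pull token for Docker's rate-limit preview repository.
  TOKEN=$(curl -fsS "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
  # Inspect the ratelimit-limit / ratelimit-remaining headers on a manifest HEAD request.
  curl -fsS --head -H "Authorization: Bearer $TOKEN" \
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
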
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-444927
helpers_test.go:235: (dbg) docker inspect addons-444927:

-- stdout --
	[
	    {
	        "Id": "0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808",
	        "Created": "2025-02-10T12:32:58.523536679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 80410,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-10T12:32:58.635075213Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808/hostname",
	        "HostsPath": "/var/lib/docker/containers/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808/hosts",
	        "LogPath": "/var/lib/docker/containers/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808/0392e5be00559452610f48922afb15fc3bf3718238ee5a2750b0b0817e743808-json.log",
	        "Name": "/addons-444927",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-444927:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-444927",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/83d95d0b735939a4a29b22a04372394207a6bc27b80ad4a75ca151335b6b2534-init/diff:/var/lib/docker/overlay2/9ffca27f7ebed742e3d0dd8f2061c1044c6b8fc8f60ace2c8ab1f353604acf23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/83d95d0b735939a4a29b22a04372394207a6bc27b80ad4a75ca151335b6b2534/merged",
	                "UpperDir": "/var/lib/docker/overlay2/83d95d0b735939a4a29b22a04372394207a6bc27b80ad4a75ca151335b6b2534/diff",
	                "WorkDir": "/var/lib/docker/overlay2/83d95d0b735939a4a29b22a04372394207a6bc27b80ad4a75ca151335b6b2534/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-444927",
	                "Source": "/var/lib/docker/volumes/addons-444927/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-444927",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-444927",
	                "name.minikube.sigs.k8s.io": "addons-444927",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "142168837ddfcc616a83ce727af2935e56e87646539fb573615a064675a21b43",
	            "SandboxKey": "/var/run/docker/netns/142168837ddf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32773"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32774"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32777"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32775"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32776"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-444927": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f0cf34e07a0770fabc43057d7e82ad303370397ce69358204b84f4691cfe4d51",
	                    "EndpointID": "2a89d3bc55336ebfc48bd151fb50618036c258700d99dd31d15c282858ae35a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-444927",
	                        "0392e5be0055"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-444927 -n addons-444927
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-444927 logs -n 25: (1.124050718s)
helpers_test.go:252: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
	| delete  | -p download-only-867318              | download-only-867318   | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
	| delete  | -p download-only-424031              | download-only-424031   | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
	| delete  | -p download-only-867318              | download-only-867318   | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
	| start   | --download-only -p                   | download-docker-433372 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC |                     |
	|         | download-docker-433372               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-433372            | download-docker-433372 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
	| start   | --download-only -p                   | binary-mirror-655095   | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC |                     |
	|         | binary-mirror-655095                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37591               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-655095              | binary-mirror-655095   | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
	| addons  | enable dashboard -p                  | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC |                     |
	|         | addons-444927                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC |                     |
	|         | addons-444927                        |                        |         |         |                     |                     |
	| start   | -p addons-444927 --wait=true         | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-444927 addons disable         | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-444927 addons disable         | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|         | -p addons-444927                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-444927 addons                 | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-444927 addons                 | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-444927 addons disable         | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:36 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-444927 addons                 | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|         | disable cloud-spanner                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-444927 ip                     | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	| addons  | addons-444927 addons disable         | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-444927 addons                 | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-444927 addons disable         | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-444927 addons disable         | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-444927 addons                 | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:37 UTC | 10 Feb 25 12:37 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-444927 addons                 | addons-444927          | jenkins | v1.35.0 | 10 Feb 25 12:37 UTC | 10 Feb 25 12:37 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:32:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:32:34.668595   79643 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:32:34.668700   79643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:32:34.668712   79643 out.go:358] Setting ErrFile to fd 2...
	I0210 12:32:34.668718   79643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:32:34.668934   79643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:32:34.669556   79643 out.go:352] Setting JSON to false
	I0210 12:32:34.670366   79643 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11704,"bootTime":1739179051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:32:34.670466   79643 start.go:139] virtualization: kvm guest
	I0210 12:32:34.672668   79643 out.go:177] * [addons-444927] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:32:34.673970   79643 notify.go:220] Checking for updates...
	I0210 12:32:34.673991   79643 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:32:34.675559   79643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:32:34.676919   79643 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:32:34.678284   79643 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 12:32:34.679831   79643 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:32:34.681067   79643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:32:34.682430   79643 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:32:34.703533   79643 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 12:32:34.703622   79643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:32:34.749138   79643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-10 12:32:34.739909471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:32:34.749238   79643 docker.go:318] overlay module found
	I0210 12:32:34.751154   79643 out.go:177] * Using the docker driver based on user configuration
	I0210 12:32:34.752616   79643 start.go:297] selected driver: docker
	I0210 12:32:34.752634   79643 start.go:901] validating driver "docker" against <nil>
	I0210 12:32:34.752646   79643 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:32:34.753453   79643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:32:34.797223   79643 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-10 12:32:34.788905295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:32:34.797390   79643 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:32:34.797613   79643 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:32:34.799191   79643 out.go:177] * Using Docker driver with root privileges
	I0210 12:32:34.800544   79643 cni.go:84] Creating CNI manager for ""
	I0210 12:32:34.800625   79643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 12:32:34.800640   79643 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 12:32:34.800720   79643 start.go:340] cluster config:
	{Name:addons-444927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 G
PUs: AutoPauseInterval:1m0s}
	I0210 12:32:34.802060   79643 out.go:177] * Starting "addons-444927" primary control-plane node in "addons-444927" cluster
	I0210 12:32:34.803239   79643 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0210 12:32:34.804545   79643 out.go:177] * Pulling base image v0.0.46 ...
	I0210 12:32:34.805740   79643 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 12:32:34.805798   79643 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0210 12:32:34.805811   79643 cache.go:56] Caching tarball of preloaded images
	I0210 12:32:34.805838   79643 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0210 12:32:34.805904   79643 preload.go:172] Found /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 12:32:34.805919   79643 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0210 12:32:34.806248   79643 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/config.json ...
	I0210 12:32:34.806277   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/config.json: {Name:mk0dcd327ca51df60d1e98951b839a50c380ada6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:32:34.821959   79643 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0210 12:32:34.822098   79643 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0210 12:32:34.822114   79643 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory, skipping pull
	I0210 12:32:34.822119   79643 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in cache, skipping pull
	I0210 12:32:34.822125   79643 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0210 12:32:34.822133   79643 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from local cache
	I0210 12:32:46.371979   79643 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 from cached tarball
	I0210 12:32:46.372030   79643 cache.go:230] Successfully downloaded all kic artifacts
	I0210 12:32:46.372071   79643 start.go:360] acquireMachinesLock for addons-444927: {Name:mke3114138a91c8004073314acab4a7dffe2d711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:32:46.372184   79643 start.go:364] duration metric: took 86.427µs to acquireMachinesLock for "addons-444927"
	I0210 12:32:46.372213   79643 start.go:93] Provisioning new machine with config: &{Name:addons-444927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0210 12:32:46.372298   79643 start.go:125] createHost starting for "" (driver="docker")
	I0210 12:32:46.374333   79643 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0210 12:32:46.374564   79643 start.go:159] libmachine.API.Create for "addons-444927" (driver="docker")
	I0210 12:32:46.374599   79643 client.go:168] LocalClient.Create starting
	I0210 12:32:46.374709   79643 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem
	I0210 12:32:46.670023   79643 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem
	I0210 12:32:46.816347   79643 cli_runner.go:164] Run: docker network inspect addons-444927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0210 12:32:46.832492   79643 cli_runner.go:211] docker network inspect addons-444927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0210 12:32:46.832571   79643 network_create.go:284] running [docker network inspect addons-444927] to gather additional debugging logs...
	I0210 12:32:46.832595   79643 cli_runner.go:164] Run: docker network inspect addons-444927
	W0210 12:32:46.848507   79643 cli_runner.go:211] docker network inspect addons-444927 returned with exit code 1
	I0210 12:32:46.848542   79643 network_create.go:287] error running [docker network inspect addons-444927]: docker network inspect addons-444927: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-444927 not found
	I0210 12:32:46.848555   79643 network_create.go:289] output of [docker network inspect addons-444927]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-444927 not found
	
	** /stderr **
	I0210 12:32:46.848647   79643 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0210 12:32:46.865047   79643 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016fc7a0}
	I0210 12:32:46.865092   79643 network_create.go:124] attempt to create docker network addons-444927 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0210 12:32:46.865151   79643 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-444927 addons-444927
	I0210 12:32:46.925267   79643 network_create.go:108] docker network addons-444927 192.168.49.0/24 created
	I0210 12:32:46.925301   79643 kic.go:121] calculated static IP "192.168.49.2" for the "addons-444927" container
	I0210 12:32:46.925367   79643 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0210 12:32:46.941561   79643 cli_runner.go:164] Run: docker volume create addons-444927 --label name.minikube.sigs.k8s.io=addons-444927 --label created_by.minikube.sigs.k8s.io=true
	I0210 12:32:46.959106   79643 oci.go:103] Successfully created a docker volume addons-444927
	I0210 12:32:46.959194   79643 cli_runner.go:164] Run: docker run --rm --name addons-444927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444927 --entrypoint /usr/bin/test -v addons-444927:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0210 12:32:54.002619   79643 cli_runner.go:217] Completed: docker run --rm --name addons-444927-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444927 --entrypoint /usr/bin/test -v addons-444927:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib: (7.043375908s)
	I0210 12:32:54.002649   79643 oci.go:107] Successfully prepared a docker volume addons-444927
	I0210 12:32:54.002669   79643 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 12:32:54.002691   79643 kic.go:194] Starting extracting preloaded images to volume ...
	I0210 12:32:54.002751   79643 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-444927:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0210 12:32:58.463230   79643 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-444927:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.460438996s)
	I0210 12:32:58.463262   79643 kic.go:203] duration metric: took 4.460568319s to extract preloaded images to volume ...
	W0210 12:32:58.463401   79643 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0210 12:32:58.463509   79643 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0210 12:32:58.509019   79643 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-444927 --name addons-444927 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-444927 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-444927 --network addons-444927 --ip 192.168.49.2 --volume addons-444927:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0210 12:32:58.807006   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Running}}
	I0210 12:32:58.825650   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:32:58.843974   79643 cli_runner.go:164] Run: docker exec addons-444927 stat /var/lib/dpkg/alternatives/iptables
	I0210 12:32:58.885985   79643 oci.go:144] the created container "addons-444927" has a running status.
	I0210 12:32:58.886014   79643 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa...
	I0210 12:32:59.099507   79643 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0210 12:32:59.124199   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:32:59.145966   79643 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0210 12:32:59.145986   79643 kic_runner.go:114] Args: [docker exec --privileged addons-444927 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0210 12:32:59.195806   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:32:59.215157   79643 machine.go:93] provisionDockerMachine start ...
	I0210 12:32:59.215255   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:32:59.238965   79643 main.go:141] libmachine: Using SSH client type: native
	I0210 12:32:59.239204   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I0210 12:32:59.239218   79643 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 12:32:59.467760   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-444927
	
	I0210 12:32:59.467796   79643 ubuntu.go:169] provisioning hostname "addons-444927"
	I0210 12:32:59.467860   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:32:59.485324   79643 main.go:141] libmachine: Using SSH client type: native
	I0210 12:32:59.485504   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I0210 12:32:59.485518   79643 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-444927 && echo "addons-444927" | sudo tee /etc/hostname
	I0210 12:32:59.623529   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-444927
	
	I0210 12:32:59.623612   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:32:59.640437   79643 main.go:141] libmachine: Using SSH client type: native
	I0210 12:32:59.640700   79643 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32773 <nil> <nil>}
	I0210 12:32:59.640728   79643 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-444927' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-444927/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-444927' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:32:59.768448   79643 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 12:32:59.768498   79643 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20390-71607/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-71607/.minikube}
	I0210 12:32:59.768522   79643 ubuntu.go:177] setting up certificates
	I0210 12:32:59.768534   79643 provision.go:84] configureAuth start
	I0210 12:32:59.768619   79643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444927
	I0210 12:32:59.784829   79643 provision.go:143] copyHostCerts
	I0210 12:32:59.784902   79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/ca.pem (1082 bytes)
	I0210 12:32:59.785015   79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/cert.pem (1123 bytes)
	I0210 12:32:59.785076   79643 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/key.pem (1675 bytes)
	I0210 12:32:59.785125   79643 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem org=jenkins.addons-444927 san=[127.0.0.1 192.168.49.2 addons-444927 localhost minikube]
	I0210 12:33:00.067778   79643 provision.go:177] copyRemoteCerts
	I0210 12:33:00.067835   79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:33:00.067868   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:00.084351   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:00.180777   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 12:33:00.202111   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 12:33:00.223345   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 12:33:00.244932   79643 provision.go:87] duration metric: took 476.378012ms to configureAuth
	I0210 12:33:00.244972   79643 ubuntu.go:193] setting minikube options for container-runtime
	I0210 12:33:00.245142   79643 config.go:182] Loaded profile config "addons-444927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:33:00.245154   79643 machine.go:96] duration metric: took 1.029975044s to provisionDockerMachine
	I0210 12:33:00.245161   79643 client.go:171] duration metric: took 13.870552404s to LocalClient.Create
	I0210 12:33:00.245177   79643 start.go:167] duration metric: took 13.870614609s to libmachine.API.Create "addons-444927"
	I0210 12:33:00.245186   79643 start.go:293] postStartSetup for "addons-444927" (driver="docker")
	I0210 12:33:00.245195   79643 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:33:00.245240   79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:33:00.245273   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:00.261834   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:00.353181   79643 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:33:00.356287   79643 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0210 12:33:00.356330   79643 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0210 12:33:00.356344   79643 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0210 12:33:00.356353   79643 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0210 12:33:00.356365   79643 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-71607/.minikube/addons for local assets ...
	I0210 12:33:00.356439   79643 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-71607/.minikube/files for local assets ...
	I0210 12:33:00.356490   79643 start.go:296] duration metric: took 111.296631ms for postStartSetup
	I0210 12:33:00.356787   79643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444927
	I0210 12:33:00.373260   79643 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/config.json ...
	I0210 12:33:00.373505   79643 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:33:00.373603   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:00.389977   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:00.477112   79643 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0210 12:33:00.481085   79643 start.go:128] duration metric: took 14.108767844s to createHost
	I0210 12:33:00.481152   79643 start.go:83] releasing machines lock for "addons-444927", held for 14.108913022s
	I0210 12:33:00.481227   79643 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-444927
	I0210 12:33:00.497622   79643 ssh_runner.go:195] Run: cat /version.json
	I0210 12:33:00.497685   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:00.497725   79643 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 12:33:00.497815   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:00.514856   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:00.514978   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:00.670869   79643 ssh_runner.go:195] Run: systemctl --version
	I0210 12:33:00.675037   79643 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 12:33:00.679064   79643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0210 12:33:00.701681   79643 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0210 12:33:00.701753   79643 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:33:00.727117   79643 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0210 12:33:00.727142   79643 start.go:495] detecting cgroup driver to use...
	I0210 12:33:00.727175   79643 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0210 12:33:00.727217   79643 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 12:33:00.738573   79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:33:00.749012   79643 docker.go:217] disabling cri-docker service (if available) ...
	I0210 12:33:00.749070   79643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 12:33:00.761823   79643 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 12:33:00.774908   79643 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 12:33:00.850817   79643 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 12:33:00.931903   79643 docker.go:233] disabling docker service ...
	I0210 12:33:00.931980   79643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 12:33:00.949608   79643 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 12:33:00.960603   79643 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 12:33:01.038638   79643 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 12:33:01.122211   79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 12:33:01.132787   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:33:01.147083   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0210 12:33:01.155684   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 12:33:01.164180   79643 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 12:33:01.164236   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 12:33:01.172718   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:33:01.181029   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 12:33:01.189141   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:33:01.197558   79643 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:33:01.205840   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 12:33:01.214568   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 12:33:01.223418   79643 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 12:33:01.232422   79643 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:33:01.240229   79643 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 12:33:01.247880   79643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:33:01.318524   79643 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 12:33:01.416117   79643 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0210 12:33:01.416194   79643 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0210 12:33:01.419682   79643 start.go:563] Will wait 60s for crictl version
	I0210 12:33:01.419727   79643 ssh_runner.go:195] Run: which crictl
	I0210 12:33:01.422719   79643 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:33:01.454910   79643 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0210 12:33:01.455010   79643 ssh_runner.go:195] Run: containerd --version
	I0210 12:33:01.476888   79643 ssh_runner.go:195] Run: containerd --version
	I0210 12:33:01.500523   79643 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
	I0210 12:33:01.501913   79643 cli_runner.go:164] Run: docker network inspect addons-444927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0210 12:33:01.518074   79643 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0210 12:33:01.521686   79643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:33:01.531566   79643 kubeadm.go:883] updating cluster {Name:addons-444927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 12:33:01.531685   79643 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 12:33:01.531732   79643 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 12:33:01.562974   79643 containerd.go:627] all images are preloaded for containerd runtime.
	I0210 12:33:01.563001   79643 containerd.go:534] Images already preloaded, skipping extraction
	I0210 12:33:01.563047   79643 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 12:33:01.592555   79643 containerd.go:627] all images are preloaded for containerd runtime.
	I0210 12:33:01.592579   79643 cache_images.go:84] Images are preloaded, skipping loading
	I0210 12:33:01.592587   79643 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 containerd true true} ...
	I0210 12:33:01.592682   79643 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-444927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 12:33:01.592735   79643 ssh_runner.go:195] Run: sudo crictl info
	I0210 12:33:01.623452   79643 cni.go:84] Creating CNI manager for ""
	I0210 12:33:01.623477   79643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 12:33:01.623486   79643 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 12:33:01.623507   79643 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-444927 NodeName:addons-444927 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 12:33:01.623615   79643 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-444927"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 12:33:01.623671   79643 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:33:01.631750   79643 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 12:33:01.631825   79643 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 12:33:01.639867   79643 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 12:33:01.655786   79643 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:33:01.671898   79643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
	I0210 12:33:01.687899   79643 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0210 12:33:01.691139   79643 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:33:01.700866   79643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:33:01.773557   79643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:33:01.785744   79643 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927 for IP: 192.168.49.2
	I0210 12:33:01.785779   79643 certs.go:194] generating shared ca certs ...
	I0210 12:33:01.785794   79643 certs.go:226] acquiring lock for ca certs: {Name:mked3bdcf754b16a474f1226f12a3cc337a7b998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:01.785949   79643 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key
	I0210 12:33:01.922615   79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt ...
	I0210 12:33:01.922647   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt: {Name:mkef3eef409099ff0f7e44091834829fbad35c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:01.922817   79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key ...
	I0210 12:33:01.922828   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key: {Name:mk510fc2adf34c3fc31ae26cb281e5b8ef5ec290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:01.922905   79643 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key
	I0210 12:33:02.094700   79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.crt ...
	I0210 12:33:02.094732   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.crt: {Name:mk889e108ee6d8144896b8270af91bb2b556eda1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:02.094887   79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key ...
	I0210 12:33:02.094898   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key: {Name:mk9c02ad352a509adb756091b4a5154f9677764d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:02.094966   79643 certs.go:256] generating profile certs ...
	I0210 12:33:02.095031   79643 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.key
	I0210 12:33:02.095047   79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt with IP's: []
	I0210 12:33:02.266358   79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt ...
	I0210 12:33:02.266388   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: {Name:mk23970da64f703f5906c3bd636af5390226c140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:02.266544   79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.key ...
	I0210 12:33:02.266554   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.key: {Name:mk6f35996bfc4d32b768f388cc84408562d576a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:02.266622   79643 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key.f68b14cb
	I0210 12:33:02.266640   79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt.f68b14cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0210 12:33:02.401191   79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt.f68b14cb ...
	I0210 12:33:02.401223   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt.f68b14cb: {Name:mk7c0503d9797503b266466a39d8a570eeb5c34c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:02.401377   79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key.f68b14cb ...
	I0210 12:33:02.401390   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key.f68b14cb: {Name:mkae929dc7ca816ce3b467ef209a3ef4562dfbff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:02.401458   79643 certs.go:381] copying /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt.f68b14cb -> /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt
	I0210 12:33:02.401527   79643 certs.go:385] copying /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key.f68b14cb -> /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key
	I0210 12:33:02.401570   79643 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.key
	I0210 12:33:02.401588   79643 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.crt with IP's: []
	I0210 12:33:02.532153   79643 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.crt ...
	I0210 12:33:02.532189   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.crt: {Name:mk9cb3b3502b3d4f0b30dd8eab54a1fb94cedbd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:02.532392   79643 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.key ...
	I0210 12:33:02.532414   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.key: {Name:mk569b49c57735c308242a9566a3f99c6d61a13d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:02.532659   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 12:33:02.532702   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem (1082 bytes)
	I0210 12:33:02.532731   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem (1123 bytes)
	I0210 12:33:02.532766   79643 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem (1675 bytes)
	I0210 12:33:02.533319   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:33:02.556004   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:33:02.578132   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:33:02.600143   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 12:33:02.621953   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0210 12:33:02.643955   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 12:33:02.666085   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 12:33:02.687771   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 12:33:02.709315   79643 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:33:02.730785   79643 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 12:33:02.747042   79643 ssh_runner.go:195] Run: openssl version
	I0210 12:33:02.752300   79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:33:02.761012   79643 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:33:02.764162   79643 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:33:02.764221   79643 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:33:02.770444   79643 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 12:33:02.778794   79643 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:33:02.781832   79643 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:33:02.781884   79643 kubeadm.go:392] StartCluster: {Name:addons-444927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-444927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:33:02.781982   79643 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0210 12:33:02.782045   79643 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 12:33:02.813399   79643 cri.go:89] found id: ""
	I0210 12:33:02.813469   79643 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 12:33:02.821478   79643 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 12:33:02.829378   79643 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0210 12:33:02.829429   79643 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 12:33:02.837541   79643 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 12:33:02.837565   79643 kubeadm.go:157] found existing configuration files:
	
	I0210 12:33:02.837614   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 12:33:02.845977   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 12:33:02.846075   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 12:33:02.853868   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 12:33:02.861974   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 12:33:02.862045   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 12:33:02.869989   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 12:33:02.878067   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 12:33:02.878119   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 12:33:02.885629   79643 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 12:33:02.893262   79643 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 12:33:02.893340   79643 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 12:33:02.900797   79643 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0210 12:33:02.936211   79643 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 12:33:02.936276   79643 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 12:33:02.952069   79643 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0210 12:33:02.952230   79643 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
	I0210 12:33:02.952305   79643 kubeadm.go:310] OS: Linux
	I0210 12:33:02.952385   79643 kubeadm.go:310] CGROUPS_CPU: enabled
	I0210 12:33:02.952461   79643 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0210 12:33:02.952541   79643 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0210 12:33:02.952607   79643 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0210 12:33:02.952681   79643 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0210 12:33:02.952751   79643 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0210 12:33:02.952812   79643 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0210 12:33:02.952882   79643 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0210 12:33:02.952962   79643 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0210 12:33:03.002868   79643 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 12:33:03.003011   79643 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 12:33:03.003158   79643 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 12:33:03.007775   79643 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 12:33:03.010521   79643 out.go:235]   - Generating certificates and keys ...
	I0210 12:33:03.010635   79643 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 12:33:03.010732   79643 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 12:33:03.261144   79643 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 12:33:03.539869   79643 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 12:33:03.694466   79643 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 12:33:03.760391   79643 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 12:33:03.947188   79643 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 12:33:03.947334   79643 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-444927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0210 12:33:04.034279   79643 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 12:33:04.034393   79643 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-444927 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0210 12:33:04.120259   79643 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 12:33:04.250619   79643 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 12:33:04.377826   79643 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 12:33:04.377894   79643 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 12:33:04.449748   79643 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 12:33:04.830769   79643 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 12:33:04.953602   79643 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 12:33:05.249944   79643 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 12:33:05.415685   79643 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 12:33:05.416138   79643 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 12:33:05.418601   79643 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 12:33:05.420455   79643 out.go:235]   - Booting up control plane ...
	I0210 12:33:05.420574   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 12:33:05.420666   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 12:33:05.421715   79643 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 12:33:05.434153   79643 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:33:05.439211   79643 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:33:05.439306   79643 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 12:33:05.523893   79643 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 12:33:05.524020   79643 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 12:33:06.525284   79643 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001493806s
	I0210 12:33:06.525390   79643 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 12:33:11.026633   79643 kubeadm.go:310] [api-check] The API server is healthy after 4.501315709s
	I0210 12:33:11.037920   79643 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 12:33:11.048457   79643 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 12:33:11.067472   79643 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 12:33:11.067810   79643 kubeadm.go:310] [mark-control-plane] Marking the node addons-444927 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 12:33:11.076130   79643 kubeadm.go:310] [bootstrap-token] Using token: 2ofei0.shg6irm5a7ti5w06
	I0210 12:33:11.077645   79643 out.go:235]   - Configuring RBAC rules ...
	I0210 12:33:11.077843   79643 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 12:33:11.081018   79643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 12:33:11.087190   79643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 12:33:11.089603   79643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 12:33:11.092120   79643 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 12:33:11.094569   79643 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 12:33:11.432371   79643 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 12:33:11.850545   79643 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 12:33:12.432775   79643 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 12:33:12.433534   79643 kubeadm.go:310] 
	I0210 12:33:12.433594   79643 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 12:33:12.433601   79643 kubeadm.go:310] 
	I0210 12:33:12.433661   79643 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 12:33:12.433668   79643 kubeadm.go:310] 
	I0210 12:33:12.433687   79643 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 12:33:12.433740   79643 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 12:33:12.433782   79643 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 12:33:12.433789   79643 kubeadm.go:310] 
	I0210 12:33:12.433834   79643 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 12:33:12.433840   79643 kubeadm.go:310] 
	I0210 12:33:12.433876   79643 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 12:33:12.433883   79643 kubeadm.go:310] 
	I0210 12:33:12.433924   79643 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 12:33:12.433989   79643 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 12:33:12.434047   79643 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 12:33:12.434053   79643 kubeadm.go:310] 
	I0210 12:33:12.434118   79643 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 12:33:12.434183   79643 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 12:33:12.434191   79643 kubeadm.go:310] 
	I0210 12:33:12.434319   79643 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 2ofei0.shg6irm5a7ti5w06 \
	I0210 12:33:12.434482   79643 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a04e7adba77e55f6c403d6b6702c62e468700cf463ec68bf30f3cb8b7b5deb33 \
	I0210 12:33:12.434510   79643 kubeadm.go:310] 	--control-plane 
	I0210 12:33:12.434515   79643 kubeadm.go:310] 
	I0210 12:33:12.434591   79643 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 12:33:12.434606   79643 kubeadm.go:310] 
	I0210 12:33:12.434668   79643 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 2ofei0.shg6irm5a7ti5w06 \
	I0210 12:33:12.434815   79643 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a04e7adba77e55f6c403d6b6702c62e468700cf463ec68bf30f3cb8b7b5deb33 
	I0210 12:33:12.437045   79643 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0210 12:33:12.437242   79643 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
	I0210 12:33:12.437355   79643 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 12:33:12.437387   79643 cni.go:84] Creating CNI manager for ""
	I0210 12:33:12.437397   79643 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 12:33:12.439477   79643 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0210 12:33:12.441109   79643 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0210 12:33:12.444652   79643 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0210 12:33:12.444669   79643 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0210 12:33:12.461126   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0210 12:33:12.654936   79643 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 12:33:12.655043   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:12.655064   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-444927 minikube.k8s.io/updated_at=2025_02_10T12_33_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04 minikube.k8s.io/name=addons-444927 minikube.k8s.io/primary=true
	I0210 12:33:12.662283   79643 ops.go:34] apiserver oom_adj: -16
	I0210 12:33:12.737974   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:13.238514   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:13.738725   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:14.238636   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:14.738851   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:15.238406   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:15.738133   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:16.238615   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:16.738130   79643 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:33:16.799926   79643 kubeadm.go:1113] duration metric: took 4.144944938s to wait for elevateKubeSystemPrivileges
	I0210 12:33:16.799969   79643 kubeadm.go:394] duration metric: took 14.018089958s to StartCluster
	I0210 12:33:16.799994   79643 settings.go:142] acquiring lock: {Name:mk48700407fa7ae208a78ae38cd1ed6c94166a30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:16.800148   79643 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:33:16.800846   79643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/kubeconfig: {Name:mk5db87da690cfc2ed8765dd4558179e05f09057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:33:16.801037   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 12:33:16.801046   79643 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0210 12:33:16.801105   79643 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0210 12:33:16.801276   79643 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-444927"
	I0210 12:33:16.801301   79643 addons.go:69] Setting yakd=true in profile "addons-444927"
	I0210 12:33:16.801323   79643 config.go:182] Loaded profile config "addons-444927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:33:16.801337   79643 addons.go:69] Setting registry=true in profile "addons-444927"
	I0210 12:33:16.801721   79643 addons.go:238] Setting addon registry=true in "addons-444927"
	I0210 12:33:16.801283   79643 addons.go:69] Setting cloud-spanner=true in profile "addons-444927"
	I0210 12:33:16.801745   79643 addons.go:69] Setting gcp-auth=true in profile "addons-444927"
	I0210 12:33:16.801774   79643 mustload.go:65] Loading cluster: addons-444927
	I0210 12:33:16.801783   79643 addons.go:238] Setting addon cloud-spanner=true in "addons-444927"
	I0210 12:33:16.801806   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.801833   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.801295   79643 addons.go:69] Setting default-storageclass=true in profile "addons-444927"
	I0210 12:33:16.801896   79643 addons.go:69] Setting metrics-server=true in profile "addons-444927"
	I0210 12:33:16.801324   79643 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-444927"
	I0210 12:33:16.801692   79643 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-444927"
	I0210 12:33:16.801983   79643 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-444927"
	I0210 12:33:16.802032   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.802056   79643 addons.go:69] Setting volumesnapshots=true in profile "addons-444927"
	I0210 12:33:16.802074   79643 addons.go:238] Setting addon volumesnapshots=true in "addons-444927"
	I0210 12:33:16.802087   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.802095   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.802578   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.802707   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.802724   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.801365   79643 addons.go:69] Setting inspektor-gadget=true in profile "addons-444927"
	I0210 12:33:16.802865   79643 addons.go:238] Setting addon inspektor-gadget=true in "addons-444927"
	I0210 12:33:16.802893   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.803008   79643 addons.go:238] Setting addon metrics-server=true in "addons-444927"
	I0210 12:33:16.803074   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.803473   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.803654   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.802083   79643 config.go:182] Loaded profile config "addons-444927": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:33:16.804831   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.802723   79643 addons.go:69] Setting ingress=true in profile "addons-444927"
	I0210 12:33:16.805053   79643 addons.go:238] Setting addon ingress=true in "addons-444927"
	I0210 12:33:16.805115   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.801707   79643 addons.go:69] Setting ingress-dns=true in profile "addons-444927"
	I0210 12:33:16.805183   79643 addons.go:238] Setting addon ingress-dns=true in "addons-444927"
	I0210 12:33:16.802710   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.805211   79643 out.go:177] * Verifying Kubernetes components...
	I0210 12:33:16.801928   79643 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-444927"
	I0210 12:33:16.804079   79643 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-444927"
	I0210 12:33:16.806128   79643 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-444927"
	I0210 12:33:16.806168   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.804534   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.806798   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.806854   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.807110   79643 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:33:16.807166   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.805226   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.801961   79643 addons.go:69] Setting volcano=true in profile "addons-444927"
	I0210 12:33:16.807623   79643 addons.go:238] Setting addon volcano=true in "addons-444927"
	I0210 12:33:16.807672   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.801327   79643 addons.go:238] Setting addon yakd=true in "addons-444927"
	I0210 12:33:16.808231   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.803843   79643 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-444927"
	I0210 12:33:16.810228   79643 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-444927"
	I0210 12:33:16.810623   79643 addons.go:69] Setting storage-provisioner=true in profile "addons-444927"
	I0210 12:33:16.810680   79643 addons.go:238] Setting addon storage-provisioner=true in "addons-444927"
	I0210 12:33:16.810721   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.838134   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.838559   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.839035   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.843204   79643 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0210 12:33:16.844601   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.844640   79643 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0210 12:33:16.844654   79643 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0210 12:33:16.844705   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.846404   79643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0210 12:33:16.848437   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.849021   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.851657   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0210 12:33:16.853671   79643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:33:16.854815   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0210 12:33:16.854923   79643 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0210 12:33:16.856200   79643 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0210 12:33:16.856220   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0210 12:33:16.856277   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.856427   79643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:33:16.861474   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0210 12:33:16.861900   79643 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0210 12:33:16.861935   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0210 12:33:16.861998   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.864666   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0210 12:33:16.866964   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0210 12:33:16.868311   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0210 12:33:16.870331   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0210 12:33:16.871537   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0210 12:33:16.872747   79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0210 12:33:16.872780   79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0210 12:33:16.872781   79643 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0210 12:33:16.872941   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.873000   79643 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0210 12:33:16.874318   79643 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0210 12:33:16.874343   79643 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0210 12:33:16.874433   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.874435   79643 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0210 12:33:16.874452   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0210 12:33:16.874502   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.882101   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.885760   79643 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-444927"
	I0210 12:33:16.885811   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.886297   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.898880   79643 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0210 12:33:16.898880   79643 out.go:177]   - Using image docker.io/registry:2.8.3
	I0210 12:33:16.900066   79643 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0210 12:33:16.900088   79643 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0210 12:33:16.900152   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.901532   79643 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0210 12:33:16.901533   79643 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0210 12:33:16.902819   79643 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0210 12:33:16.902843   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0210 12:33:16.902898   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.903130   79643 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0210 12:33:16.903146   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0210 12:33:16.903189   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.908852   79643 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 12:33:16.910103   79643 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 12:33:16.910127   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 12:33:16.910188   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.913402   79643 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0210 12:33:16.914782   79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0210 12:33:16.914802   79643 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0210 12:33:16.914859   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.916205   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.919705   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.931685   79643 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0210 12:33:16.933094   79643 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0210 12:33:16.933122   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0210 12:33:16.933185   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.938961   79643 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
	I0210 12:33:16.939483   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.943779   79643 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
	I0210 12:33:16.944883   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.946437   79643 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
	I0210 12:33:16.949094   79643 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0210 12:33:16.949122   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
	I0210 12:33:16.949184   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.949968   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.951958   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.955010   79643 addons.go:238] Setting addon default-storageclass=true in "addons-444927"
	I0210 12:33:16.955054   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:16.955245   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.955501   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:16.964944   79643 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0210 12:33:16.965016   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.965559   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.967762   79643 out.go:177]   - Using image docker.io/busybox:stable
	I0210 12:33:16.968721   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.969151   79643 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0210 12:33:16.969168   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0210 12:33:16.969216   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.970981   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.978773   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:16.980092   79643 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 12:33:16.980112   79643 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 12:33:16.980161   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:16.990566   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:17.024017   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	W0210 12:33:17.086366   79643 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0210 12:33:17.086404   79643 retry.go:31] will retry after 310.475021ms: ssh: handshake failed: EOF
	I0210 12:33:17.212008   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0210 12:33:17.219415   79643 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:33:17.219536   79643 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 12:33:17.220033   79643 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0210 12:33:17.220054   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0210 12:33:17.220482   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0210 12:33:17.395012   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0210 12:33:17.403495   79643 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0210 12:33:17.403528   79643 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0210 12:33:17.486191   79643 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0210 12:33:17.486223   79643 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0210 12:33:17.486380   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0210 12:33:17.486484   79643 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0210 12:33:17.486501   79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0210 12:33:17.487139   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0210 12:33:17.497956   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0210 12:33:17.503609   79643 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0210 12:33:17.503700   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0210 12:33:17.607746   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0210 12:33:17.694547   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 12:33:17.701702   79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0210 12:33:17.701736   79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0210 12:33:17.789601   79643 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0210 12:33:17.789702   79643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0210 12:33:17.790762   79643 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 12:33:17.790790   79643 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0210 12:33:17.804790   79643 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0210 12:33:17.804893   79643 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0210 12:33:17.805936   79643 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0210 12:33:17.806057   79643 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0210 12:33:17.893326   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0210 12:33:17.895640   79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0210 12:33:17.895720   79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0210 12:33:18.098746   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 12:33:18.200130   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0210 12:33:18.287135   79643 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0210 12:33:18.287163   79643 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0210 12:33:18.297557   79643 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0210 12:33:18.297648   79643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0210 12:33:18.386947   79643 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0210 12:33:18.387033   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0210 12:33:18.398777   79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0210 12:33:18.398872   79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0210 12:33:18.786467   79643 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0210 12:33:18.786741   79643 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0210 12:33:18.786700   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0210 12:33:18.799082   79643 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0210 12:33:18.799169   79643 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0210 12:33:18.800278   79643 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0210 12:33:18.800343   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0210 12:33:19.300183   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.088125694s)
	I0210 12:33:19.300304   79643 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.080807397s)
	I0210 12:33:19.301465   79643 node_ready.go:35] waiting up to 6m0s for node "addons-444927" to be "Ready" ...
	I0210 12:33:19.399315   79643 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0210 12:33:19.399406   79643 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0210 12:33:19.405130   79643 node_ready.go:49] node "addons-444927" has status "Ready":"True"
	I0210 12:33:19.405157   79643 node_ready.go:38] duration metric: took 103.619571ms for node "addons-444927" to be "Ready" ...
	I0210 12:33:19.405170   79643 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:33:19.501306   79643 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:19.588953   79643 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.369376678s)
	I0210 12:33:19.589045   79643 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0210 12:33:19.704489   79643 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:33:19.704581   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0210 12:33:19.908163   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0210 12:33:19.996801   79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0210 12:33:19.996832   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0210 12:33:20.092965   79643 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-444927" context rescaled to 1 replicas
	I0210 12:33:20.385014   79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0210 12:33:20.385116   79643 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0210 12:33:20.399436   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:33:20.597102   79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0210 12:33:20.597130   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0210 12:33:21.000145   79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0210 12:33:21.000175   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0210 12:33:21.290846   79643 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0210 12:33:21.290879   79643 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0210 12:33:21.605576   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:21.696402   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0210 12:33:23.890657   79643 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0210 12:33:23.890734   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:23.918163   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:24.007369   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:24.310882   79643 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0210 12:33:24.410800   79643 addons.go:238] Setting addon gcp-auth=true in "addons-444927"
	I0210 12:33:24.410897   79643 host.go:66] Checking if "addons-444927" exists ...
	I0210 12:33:24.411360   79643 cli_runner.go:164] Run: docker container inspect addons-444927 --format={{.State.Status}}
	I0210 12:33:24.429471   79643 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0210 12:33:24.429515   79643 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-444927
	I0210 12:33:24.445428   79643 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32773 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/addons-444927/id_rsa Username:docker}
	I0210 12:33:26.014149   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:27.013531   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.793016074s)
	I0210 12:33:27.013704   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.618648268s)
	I0210 12:33:27.013738   79643 addons.go:479] Verifying addon ingress=true in "addons-444927"
	I0210 12:33:27.013748   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.526559257s)
	I0210 12:33:27.013816   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (9.406045546s)
	I0210 12:33:27.013887   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.319264024s)
	I0210 12:33:27.013962   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.120554129s)
	I0210 12:33:27.013788   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.527377786s)
	I0210 12:33:27.014019   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.915185644s)
	I0210 12:33:27.013797   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.515813648s)
	I0210 12:33:27.014206   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.813996526s)
	I0210 12:33:27.014225   79643 addons.go:479] Verifying addon metrics-server=true in "addons-444927"
	I0210 12:33:27.014232   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.227346765s)
	I0210 12:33:27.014250   79643 addons.go:479] Verifying addon registry=true in "addons-444927"
	I0210 12:33:27.014273   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.106076426s)
	I0210 12:33:27.014393   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.614928343s)
	W0210 12:33:27.014422   79643 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0210 12:33:27.014445   79643 retry.go:31] will retry after 125.415761ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0210 12:33:27.015282   79643 out.go:177] * Verifying ingress addon...
	I0210 12:33:27.015995   79643 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-444927 service yakd-dashboard -n yakd-dashboard
	
	I0210 12:33:27.015999   79643 out.go:177] * Verifying registry addon...
	I0210 12:33:27.017498   79643 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0210 12:33:27.018367   79643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0210 12:33:27.088833   79643 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0210 12:33:27.088859   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:27.089435   79643 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0210 12:33:27.089456   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0210 12:33:27.094441   79643 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0210 12:33:27.140441   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0210 12:33:27.590601   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:27.590825   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:27.693815   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.997275868s)
	I0210 12:33:27.693856   79643 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-444927"
	I0210 12:33:27.694158   79643 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.264638761s)
	I0210 12:33:27.695570   79643 out.go:177] * Verifying csi-hostpath-driver addon...
	I0210 12:33:27.695570   79643 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0210 12:33:27.697882   79643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0210 12:33:27.699542   79643 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0210 12:33:27.700672   79643 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0210 12:33:27.700723   79643 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0210 12:33:27.710105   79643 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0210 12:33:27.710129   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:27.803984   79643 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0210 12:33:27.804013   79643 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0210 12:33:27.902637   79643 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0210 12:33:27.902671   79643 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0210 12:33:27.999396   79643 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0210 12:33:28.088439   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:28.088848   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:28.201773   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:28.507991   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:28.521203   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:28.595952   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:28.701219   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:29.086216   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:29.086471   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:29.201718   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:29.288121   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.147627082s)
	I0210 12:33:29.288181   79643 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.288744229s)
	I0210 12:33:29.289590   79643 addons.go:479] Verifying addon gcp-auth=true in "addons-444927"
	I0210 12:33:29.292196   79643 out.go:177] * Verifying gcp-auth addon...
	I0210 12:33:29.294748   79643 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0210 12:33:29.296932   79643 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0210 12:33:29.521089   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:29.521268   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:29.701446   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:30.021180   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:30.021327   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:30.201792   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:30.520708   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:30.520831   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:30.701768   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:31.006361   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:31.021048   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:31.021143   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:31.201816   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:31.521436   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:31.521621   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:31.701460   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:32.021345   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:32.021556   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:32.202178   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:32.521440   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:32.521454   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:32.701544   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:33.020598   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:33.021124   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:33.200718   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:33.506074   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:33.520186   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:33.521266   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:33.701201   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:34.020890   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:34.020924   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:34.200431   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:34.520747   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:34.520943   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:34.700856   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:35.020587   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:35.021288   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:35.201196   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:35.520793   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:35.521193   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:35.701233   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:36.006716   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:36.020823   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:36.021083   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:36.201188   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:36.520428   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:36.521078   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:36.701095   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:37.020683   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:37.020850   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:37.201354   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:37.520816   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:37.520841   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:37.701611   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:38.020412   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:38.020924   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:38.200740   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:38.506568   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:38.520881   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:38.521054   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:38.700836   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:39.020587   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:39.020678   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:39.201254   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:39.522118   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:39.522862   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:39.701373   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:40.021143   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:40.021176   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:40.200977   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:40.520795   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:40.520833   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:40.702040   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:41.005755   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:41.020832   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:41.021187   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:41.201339   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:41.520423   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:41.521257   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:41.701615   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:42.020627   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:42.021014   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:42.201328   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:42.521340   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:42.521607   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:42.701589   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:43.007218   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:43.021315   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:43.021536   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:43.201670   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:43.520647   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:43.521263   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:43.701443   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:44.021244   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:44.021264   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:44.205924   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:44.521094   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:44.521159   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:44.702041   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:45.020742   79643 pod_ready.go:103] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:45.021466   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:45.021593   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:45.201511   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:45.541041   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:45.541207   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:45.701881   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:46.020561   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:46.020830   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:46.201773   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:46.505883   79643 pod_ready.go:93] pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace has status "Ready":"True"
	I0210 12:33:46.505905   79643 pod_ready.go:82] duration metric: took 27.004506937s for pod "coredns-668d6bf9bc-pmclr" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.505915   79643 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-zrfk6" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.507390   79643 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-zrfk6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zrfk6" not found
	I0210 12:33:46.507419   79643 pod_ready.go:82] duration metric: took 1.498927ms for pod "coredns-668d6bf9bc-zrfk6" in "kube-system" namespace to be "Ready" ...
	E0210 12:33:46.507429   79643 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-zrfk6" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-zrfk6" not found
	I0210 12:33:46.507437   79643 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-444927" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.510591   79643 pod_ready.go:93] pod "etcd-addons-444927" in "kube-system" namespace has status "Ready":"True"
	I0210 12:33:46.510606   79643 pod_ready.go:82] duration metric: took 3.164494ms for pod "etcd-addons-444927" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.510617   79643 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-444927" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.513595   79643 pod_ready.go:93] pod "kube-apiserver-addons-444927" in "kube-system" namespace has status "Ready":"True"
	I0210 12:33:46.513611   79643 pod_ready.go:82] duration metric: took 2.987785ms for pod "kube-apiserver-addons-444927" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.513620   79643 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-444927" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.516906   79643 pod_ready.go:93] pod "kube-controller-manager-addons-444927" in "kube-system" namespace has status "Ready":"True"
	I0210 12:33:46.516923   79643 pod_ready.go:82] duration metric: took 3.29807ms for pod "kube-controller-manager-addons-444927" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.516932   79643 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bhdzg" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.519735   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:46.520516   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:46.701580   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:46.703427   79643 pod_ready.go:93] pod "kube-proxy-bhdzg" in "kube-system" namespace has status "Ready":"True"
	I0210 12:33:46.703449   79643 pod_ready.go:82] duration metric: took 186.511762ms for pod "kube-proxy-bhdzg" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:46.703460   79643 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-444927" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:47.021297   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:47.021408   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:47.104919   79643 pod_ready.go:93] pod "kube-scheduler-addons-444927" in "kube-system" namespace has status "Ready":"True"
	I0210 12:33:47.104949   79643 pod_ready.go:82] duration metric: took 401.480944ms for pod "kube-scheduler-addons-444927" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:47.104965   79643 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace to be "Ready" ...
	I0210 12:33:47.202288   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:47.520614   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:47.521096   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:47.701157   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:48.020992   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:48.021255   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:48.201491   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:48.521420   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:48.521462   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:48.701557   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:49.021749   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:49.021899   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:49.109785   79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:49.201616   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:49.521166   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:49.521287   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:49.701550   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:50.122437   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:50.122692   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:50.201383   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:50.521274   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:50.521307   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:50.701526   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:51.020439   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:51.020841   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:51.110364   79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:51.201443   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:51.520614   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:51.521154   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:51.701412   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:52.021074   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:52.021074   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:52.202311   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:52.521057   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:52.521078   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:52.701225   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:53.020076   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:53.020897   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:53.201012   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:53.520869   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:53.520945   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:53.609993   79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:53.700838   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:54.021143   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:54.021173   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:54.200911   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:54.520493   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:54.521180   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:54.701640   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:55.020829   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:55.020948   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:55.200814   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:55.520544   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:55.521380   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:55.610124   79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:55.701152   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:56.021953   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:56.022003   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:56.201857   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:56.520546   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:56.521042   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:56.701936   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:57.020505   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:57.021206   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:57.201800   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:57.521507   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:57.521792   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:57.701257   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:58.020752   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:58.020939   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:58.110670   79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
	I0210 12:33:58.201616   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:58.521297   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:58.521400   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:58.701120   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:59.020447   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:59.021033   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:59.201758   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:33:59.521566   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:33:59.521621   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:33:59.701504   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:00.021368   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:00.021432   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:00.201085   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:00.520496   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:00.520875   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:00.610100   79643 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"False"
	I0210 12:34:00.701109   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:01.021536   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:01.021588   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:01.222775   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:01.521199   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:01.521268   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:01.609438   79643 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace has status "Ready":"True"
	I0210 12:34:01.609458   79643 pod_ready.go:82] duration metric: took 14.504486382s for pod "nvidia-device-plugin-daemonset-h5pb4" in "kube-system" namespace to be "Ready" ...
	I0210 12:34:01.609466   79643 pod_ready.go:39] duration metric: took 42.204282274s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:34:01.609490   79643 api_server.go:52] waiting for apiserver process to appear ...
	I0210 12:34:01.609547   79643 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:34:01.624528   79643 api_server.go:72] duration metric: took 44.823449986s to wait for apiserver process to appear ...
	I0210 12:34:01.624561   79643 api_server.go:88] waiting for apiserver healthz status ...
	I0210 12:34:01.624589   79643 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0210 12:34:01.630346   79643 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0210 12:34:01.631435   79643 api_server.go:141] control plane version: v1.32.1
	I0210 12:34:01.631463   79643 api_server.go:131] duration metric: took 6.893671ms to wait for apiserver health ...
	I0210 12:34:01.631472   79643 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 12:34:01.635155   79643 system_pods.go:59] 19 kube-system pods found
	I0210 12:34:01.635197   79643 system_pods.go:61] "amd-gpu-device-plugin-tffg2" [cbfc6cf0-103c-44d4-85d7-bb02305be0fb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0210 12:34:01.635206   79643 system_pods.go:61] "coredns-668d6bf9bc-pmclr" [fdb0ba16-77a2-4571-9a91-517bcfa86336] Running
	I0210 12:34:01.635214   79643 system_pods.go:61] "csi-hostpath-attacher-0" [c96bbb2d-25b5-49cb-ac3d-0dfa740a57dc] Running
	I0210 12:34:01.635222   79643 system_pods.go:61] "csi-hostpath-resizer-0" [392cfd8f-12f0-46ab-b74d-47d2d30396c4] Running
	I0210 12:34:01.635230   79643 system_pods.go:61] "csi-hostpathplugin-8sfhb" [4efb0c0d-48cf-4a8c-bd48-7509139a7c09] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0210 12:34:01.635242   79643 system_pods.go:61] "etcd-addons-444927" [7e441f46-31ed-4b8e-83ef-470541260b8b] Running
	I0210 12:34:01.635248   79643 system_pods.go:61] "kindnet-b2qzd" [3d4a13b3-ab3e-4626-806a-b5ed71164ce3] Running
	I0210 12:34:01.635253   79643 system_pods.go:61] "kube-apiserver-addons-444927" [c4a04cf9-cdc3-4ae7-b94e-def5f025c9a0] Running
	I0210 12:34:01.635261   79643 system_pods.go:61] "kube-controller-manager-addons-444927" [137bde99-d8e3-4d8f-803a-fa7d22ca2569] Running
	I0210 12:34:01.635267   79643 system_pods.go:61] "kube-ingress-dns-minikube" [3a711174-7f5b-48d3-81d5-d11c8305f7e8] Running
	I0210 12:34:01.635275   79643 system_pods.go:61] "kube-proxy-bhdzg" [cd096b8d-0142-4c4a-bb11-eda48b1ef5d7] Running
	I0210 12:34:01.635282   79643 system_pods.go:61] "kube-scheduler-addons-444927" [722f34f2-250a-4f48-8479-774786f34499] Running
	I0210 12:34:01.635290   79643 system_pods.go:61] "metrics-server-7fbb699795-9rzwp" [18f2c184-138a-4ae6-9b10-1f55f0ffe77d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 12:34:01.635299   79643 system_pods.go:61] "nvidia-device-plugin-daemonset-h5pb4" [3346d1a2-d520-442b-8349-6a8ecaea1a6f] Running
	I0210 12:34:01.635305   79643 system_pods.go:61] "registry-6c88467877-gh4sc" [42010757-f6a0-42bd-af45-d200619f078b] Running
	I0210 12:34:01.635316   79643 system_pods.go:61] "registry-proxy-lkxgg" [48863c7e-8f22-4c47-a211-3f269092501f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0210 12:34:01.635322   79643 system_pods.go:61] "snapshot-controller-68b874b76f-9d48t" [3a3cb3ea-bfa1-43c4-b04a-298b979dab6e] Running
	I0210 12:34:01.635331   79643 system_pods.go:61] "snapshot-controller-68b874b76f-l9jq9" [8e900a1f-21ea-4c89-b7b7-ae34ba60446d] Running
	I0210 12:34:01.635336   79643 system_pods.go:61] "storage-provisioner" [09ac7bfd-a4d4-4e2d-a1fc-1099e247efad] Running
	I0210 12:34:01.635347   79643 system_pods.go:74] duration metric: took 3.866998ms to wait for pod list to return data ...
	I0210 12:34:01.635357   79643 default_sa.go:34] waiting for default service account to be created ...
	I0210 12:34:01.637756   79643 default_sa.go:45] found service account: "default"
	I0210 12:34:01.637777   79643 default_sa.go:55] duration metric: took 2.411649ms for default service account to be created ...
	I0210 12:34:01.637786   79643 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 12:34:01.640908   79643 system_pods.go:86] 19 kube-system pods found
	I0210 12:34:01.640950   79643 system_pods.go:89] "amd-gpu-device-plugin-tffg2" [cbfc6cf0-103c-44d4-85d7-bb02305be0fb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0210 12:34:01.640962   79643 system_pods.go:89] "coredns-668d6bf9bc-pmclr" [fdb0ba16-77a2-4571-9a91-517bcfa86336] Running
	I0210 12:34:01.640972   79643 system_pods.go:89] "csi-hostpath-attacher-0" [c96bbb2d-25b5-49cb-ac3d-0dfa740a57dc] Running
	I0210 12:34:01.640978   79643 system_pods.go:89] "csi-hostpath-resizer-0" [392cfd8f-12f0-46ab-b74d-47d2d30396c4] Running
	I0210 12:34:01.640993   79643 system_pods.go:89] "csi-hostpathplugin-8sfhb" [4efb0c0d-48cf-4a8c-bd48-7509139a7c09] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0210 12:34:01.641003   79643 system_pods.go:89] "etcd-addons-444927" [7e441f46-31ed-4b8e-83ef-470541260b8b] Running
	I0210 12:34:01.641014   79643 system_pods.go:89] "kindnet-b2qzd" [3d4a13b3-ab3e-4626-806a-b5ed71164ce3] Running
	I0210 12:34:01.641024   79643 system_pods.go:89] "kube-apiserver-addons-444927" [c4a04cf9-cdc3-4ae7-b94e-def5f025c9a0] Running
	I0210 12:34:01.641034   79643 system_pods.go:89] "kube-controller-manager-addons-444927" [137bde99-d8e3-4d8f-803a-fa7d22ca2569] Running
	I0210 12:34:01.641046   79643 system_pods.go:89] "kube-ingress-dns-minikube" [3a711174-7f5b-48d3-81d5-d11c8305f7e8] Running
	I0210 12:34:01.641055   79643 system_pods.go:89] "kube-proxy-bhdzg" [cd096b8d-0142-4c4a-bb11-eda48b1ef5d7] Running
	I0210 12:34:01.641061   79643 system_pods.go:89] "kube-scheduler-addons-444927" [722f34f2-250a-4f48-8479-774786f34499] Running
	I0210 12:34:01.641070   79643 system_pods.go:89] "metrics-server-7fbb699795-9rzwp" [18f2c184-138a-4ae6-9b10-1f55f0ffe77d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0210 12:34:01.641080   79643 system_pods.go:89] "nvidia-device-plugin-daemonset-h5pb4" [3346d1a2-d520-442b-8349-6a8ecaea1a6f] Running
	I0210 12:34:01.641087   79643 system_pods.go:89] "registry-6c88467877-gh4sc" [42010757-f6a0-42bd-af45-d200619f078b] Running
	I0210 12:34:01.641098   79643 system_pods.go:89] "registry-proxy-lkxgg" [48863c7e-8f22-4c47-a211-3f269092501f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0210 12:34:01.641107   79643 system_pods.go:89] "snapshot-controller-68b874b76f-9d48t" [3a3cb3ea-bfa1-43c4-b04a-298b979dab6e] Running
	I0210 12:34:01.641116   79643 system_pods.go:89] "snapshot-controller-68b874b76f-l9jq9" [8e900a1f-21ea-4c89-b7b7-ae34ba60446d] Running
	I0210 12:34:01.641126   79643 system_pods.go:89] "storage-provisioner" [09ac7bfd-a4d4-4e2d-a1fc-1099e247efad] Running
	I0210 12:34:01.641140   79643 system_pods.go:126] duration metric: took 3.346346ms to wait for k8s-apps to be running ...
	I0210 12:34:01.641153   79643 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 12:34:01.641211   79643 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:34:01.655042   79643 system_svc.go:56] duration metric: took 13.87799ms WaitForService to wait for kubelet
	I0210 12:34:01.655074   79643 kubeadm.go:582] duration metric: took 44.854004154s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:34:01.655105   79643 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:34:01.657661   79643 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0210 12:34:01.657704   79643 node_conditions.go:123] node cpu capacity is 8
	I0210 12:34:01.657721   79643 node_conditions.go:105] duration metric: took 2.610448ms to run NodePressure ...
	I0210 12:34:01.657738   79643 start.go:241] waiting for startup goroutines ...
	I0210 12:34:01.735138   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:02.031635   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:02.031727   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:02.232204   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:02.521733   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:02.521750   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:02.701436   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:03.021000   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:03.021316   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:03.221051   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:03.520832   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:03.521330   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:03.701048   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:04.020854   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:04.021027   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:04.201724   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:04.521464   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:04.521589   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:04.701770   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:05.021539   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:05.021601   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:05.201546   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:05.520681   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:05.520800   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:05.702086   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:06.020829   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:06.021247   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:06.201270   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:06.521463   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:06.521563   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:06.701406   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:07.020414   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:07.021068   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:07.201339   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:07.520632   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:07.521178   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:07.701209   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:08.021554   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:08.021637   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:08.201411   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:08.521343   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:08.521399   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:08.701848   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:09.021283   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:09.021334   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:09.200955   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:09.520738   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:09.521147   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:09.701197   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:10.021409   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0210 12:34:10.021451   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:10.201998   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:10.521456   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:10.521721   79643 kapi.go:107] duration metric: took 43.503349444s to wait for kubernetes.io/minikube-addons=registry ...
	I0210 12:34:10.702483   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:11.021247   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:11.201330   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:11.520737   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:11.701678   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:12.020696   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:12.201706   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:12.520827   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:12.700908   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:13.021038   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:13.200709   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:13.521854   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:13.701826   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:14.020645   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:14.201953   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:14.520854   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:14.701821   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:15.020508   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:15.201789   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:15.520791   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:15.701560   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:16.021369   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:16.201775   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:16.521226   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:16.701150   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:17.020725   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:17.201578   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:17.521254   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:17.700729   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:18.021429   79643 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0210 12:34:18.201576   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:18.521712   79643 kapi.go:107] duration metric: took 51.50420804s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0210 12:34:18.701598   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:19.201146   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:19.702016   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:20.204254   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:20.701640   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:21.201703   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:21.701095   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:22.200909   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0210 12:34:22.701644   79643 kapi.go:107] duration metric: took 55.00376107s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0210 12:34:52.297870   79643 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0210 12:34:52.297895   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:34:52.797993   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:34:53.297526   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:34:53.797559   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:34:54.297557   79643 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0210 12:34:54.798550   79643 kapi.go:107] duration metric: took 1m25.503798382s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0210 12:34:54.800386   79643 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-444927 cluster.
	I0210 12:34:54.801893   79643 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0210 12:34:54.803268   79643 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0210 12:34:54.804681   79643 out.go:177] * Enabled addons: cloud-spanner, volcano, nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0210 12:34:54.806202   79643 addons.go:514] duration metric: took 1m38.005101774s for enable addons: enabled=[cloud-spanner volcano nvidia-device-plugin amd-gpu-device-plugin storage-provisioner inspektor-gadget ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0210 12:34:54.806244   79643 start.go:246] waiting for cluster config update ...
	I0210 12:34:54.806272   79643 start.go:255] writing updated cluster config ...
	I0210 12:34:54.806535   79643 ssh_runner.go:195] Run: rm -f paused
	I0210 12:34:54.857164   79643 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 12:34:54.858976   79643 out.go:177] * Done! kubectl is now configured to use "addons-444927" cluster and "default" namespace by default
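Note: the start log above is dominated by minikube's readiness polling. kapi.go re-checks each addon label selector roughly every 500ms until the matching pods report Ready, pod_ready.go does the same for the system-critical pods, and the run finishes after the apiserver healthz probe at https://192.168.49.2:8443/healthz returns 200. The snippet below is a minimal client-go sketch of that kind of label-selector wait loop; it is illustrative only (not minikube's actual kapi.go code), reuses the "kubernetes.io/minikube-addons=registry" selector and kube-system namespace seen in the log, and assumes a kubeconfig at the default location.

// Minimal sketch (not minikube source): poll pods matching a label selector
// until every one of them reports the PodReady condition, as the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumes ~/.kube/config points at the test cluster; minikube itself wires
	// the config differently, so this path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "kubernetes.io/minikube-addons=registry"
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := 0
			for _, p := range pods.Items {
				if podReady(p) {
					ready++
				}
			}
			if ready == len(pods.Items) {
				fmt.Println("all pods ready for", selector)
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls on roughly this interval
	}
}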
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d8ca48ba1c44f       56cc512116c8f       3 minutes ago       Running             busybox                   0                   df487e63ae9d5       busybox
	335a0ee58f465       ee44bc2368033       4 minutes ago       Running             controller                0                   2f287e6d0edf2       ingress-nginx-controller-56d7c84fd4-zbfkg
	0cec2cbf4135a       e16d1e3a10667       4 minutes ago       Running             local-path-provisioner    0                   3d3c2ca75f484       local-path-provisioner-76f89f99b5-hllh6
	a870218b99697       a62eeff05ba51       5 minutes ago       Exited              patch                     2                   502eca8d57fe2       ingress-nginx-admission-patch-kwk2g
	85e22ccf2fd29       a62eeff05ba51       5 minutes ago       Exited              create                    0                   097560c943fa3       ingress-nginx-admission-create-zsvgr
	8265f7e56e2bb       c69fa2e9cbf5f       5 minutes ago       Running             coredns                   0                   7ebb694aa2770       coredns-668d6bf9bc-pmclr
	13fb15fed8d27       30dd67412fdea       5 minutes ago       Running             minikube-ingress-dns      0                   fe96d89ee65c6       kube-ingress-dns-minikube
	bb21e3efe3fc2       d300845f67aeb       5 minutes ago       Running             kindnet-cni               0                   c3cf0810b88ff       kindnet-b2qzd
	7bef7a777b3e3       6e38f40d628db       5 minutes ago       Running             storage-provisioner       0                   d165d6f754cc7       storage-provisioner
	4e15a64a6e3a3       e29f9c7391fd9       5 minutes ago       Running             kube-proxy                0                   b0f73a109986b       kube-proxy-bhdzg
	6b5511caeb4b6       95c0bda56fc4d       5 minutes ago       Running             kube-apiserver            0                   5edfcf3c6599b       kube-apiserver-addons-444927
	3689951b3e8e3       a9e7e6b294baf       5 minutes ago       Running             etcd                      0                   07e68c6d1f553       etcd-addons-444927
	f0141e94893c8       2b0d6572d062c       5 minutes ago       Running             kube-scheduler            0                   c83ce2f92899c       kube-scheduler-addons-444927
	285b1d1cd9a34       019ee182b58e2       5 minutes ago       Running             kube-controller-manager   0                   fc5b57769b2c7       kube-controller-manager-addons-444927
	
	
	==> containerd <==
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.279550754Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"204b3fdd994b9b12ae512dae5aa0cd650d6a6001000c83219cd4d35c4491f59d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.279621064Z" level=info msg="RemovePodSandbox \"204b3fdd994b9b12ae512dae5aa0cd650d6a6001000c83219cd4d35c4491f59d\" returns successfully"
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.280134841Z" level=info msg="StopPodSandbox for \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\""
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.287306406Z" level=info msg="TearDown network for sandbox \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\" successfully"
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.287335074Z" level=info msg="StopPodSandbox for \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\" returns successfully"
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.287818887Z" level=info msg="RemovePodSandbox for \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\""
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.287851476Z" level=info msg="Forcibly stopping sandbox \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\""
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.295181893Z" level=info msg="TearDown network for sandbox \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\" successfully"
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.299442048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Feb 10 12:37:12 addons-444927 containerd[860]: time="2025-02-10T12:37:12.299511244Z" level=info msg="RemovePodSandbox \"863a65fc406e5765a824682d8117cea9e2d857f8d106b33c8c455c602557bbf5\" returns successfully"
	Feb 10 12:37:25 addons-444927 containerd[860]: time="2025-02-10T12:37:25.694912719Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Feb 10 12:37:25 addons-444927 containerd[860]: time="2025-02-10T12:37:25.696854236Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:37:25 addons-444927 containerd[860]: time="2025-02-10T12:37:25.958308893Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:37:26 addons-444927 containerd[860]: time="2025-02-10T12:37:26.573754007Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:37:26 addons-444927 containerd[860]: time="2025-02-10T12:37:26.573812308Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=11042"
	Feb 10 12:37:38 addons-444927 containerd[860]: time="2025-02-10T12:37:38.694542017Z" level=info msg="PullImage \"busybox:stable\""
	Feb 10 12:37:38 addons-444927 containerd[860]: time="2025-02-10T12:37:38.696619550Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:37:39 addons-444927 containerd[860]: time="2025-02-10T12:37:39.070245317Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:37:39 addons-444927 containerd[860]: time="2025-02-10T12:37:39.682588005Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:37:39 addons-444927 containerd[860]: time="2025-02-10T12:37:39.682645631Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=11054"
	Feb 10 12:38:51 addons-444927 containerd[860]: time="2025-02-10T12:38:51.694814666Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Feb 10 12:38:51 addons-444927 containerd[860]: time="2025-02-10T12:38:51.696890381Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:38:51 addons-444927 containerd[860]: time="2025-02-10T12:38:51.972534534Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:38:52 addons-444927 containerd[860]: time="2025-02-10T12:38:52.754437670Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:38:52 addons-444927 containerd[860]: time="2025-02-10T12:38:52.754500375Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=21399"
	
	
	==> coredns [8265f7e56e2bb889b5828efc038b36fa8cc3c87eb1f2499ab085aa4454899dcc] <==
	[INFO] 10.244.0.16:46096 - 49638 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000144415s
	[INFO] 10.244.0.16:53262 - 33490 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00356957s
	[INFO] 10.244.0.16:53262 - 33856 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.003971905s
	[INFO] 10.244.0.16:42348 - 26371 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004565804s
	[INFO] 10.244.0.16:42348 - 26084 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005381761s
	[INFO] 10.244.0.16:48195 - 10688 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005672715s
	[INFO] 10.244.0.16:48195 - 10353 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.007116525s
	[INFO] 10.244.0.16:43289 - 1126 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110088s
	[INFO] 10.244.0.16:43289 - 1426 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000169879s
	[INFO] 10.244.0.26:57046 - 6317 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184454s
	[INFO] 10.244.0.26:44269 - 32005 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00027705s
	[INFO] 10.244.0.26:41952 - 49104 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123825s
	[INFO] 10.244.0.26:51853 - 47082 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00017984s
	[INFO] 10.244.0.26:41441 - 20398 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126396s
	[INFO] 10.244.0.26:57457 - 62700 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125438s
	[INFO] 10.244.0.26:46468 - 22650 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.006806491s
	[INFO] 10.244.0.26:59530 - 24465 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.008673079s
	[INFO] 10.244.0.26:47549 - 30380 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006622696s
	[INFO] 10.244.0.26:49073 - 7764 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007343908s
	[INFO] 10.244.0.26:55471 - 59180 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005630425s
	[INFO] 10.244.0.26:57911 - 54108 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006281675s
	[INFO] 10.244.0.26:36166 - 30819 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000715616s
	[INFO] 10.244.0.26:37151 - 17899 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000758884s
	[INFO] 10.244.0.31:55875 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000290764s
	[INFO] 10.244.0.31:34599 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00016092s
	
	
	==> describe nodes <==
	Name:               addons-444927
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-444927
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04
	                    minikube.k8s.io/name=addons-444927
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T12_33_12_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-444927
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:33:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-444927
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:38:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:36:15 +0000   Mon, 10 Feb 2025 12:33:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:36:15 +0000   Mon, 10 Feb 2025 12:33:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:36:15 +0000   Mon, 10 Feb 2025 12:33:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:36:15 +0000   Mon, 10 Feb 2025 12:33:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-444927
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 84d0349545ea4184a13e466359bce586
	  System UUID:                790b434d-ab01-481d-9c8e-24468aad0754
	  Boot ID:                    1d7cad77-75d7-418d-a590-e8096751a144
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-zbfkg    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m39s
	  kube-system                 coredns-668d6bf9bc-pmclr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m47s
	  kube-system                 etcd-addons-444927                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m52s
	  kube-system                 kindnet-b2qzd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m47s
	  kube-system                 kube-apiserver-addons-444927                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-controller-manager-addons-444927        200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-proxy-bhdzg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-scheduler-addons-444927                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  local-path-storage          local-path-provisioner-76f89f99b5-hllh6      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m42s  kube-proxy       
	  Normal   Starting                 5m52s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m52s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  5m52s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m52s  kubelet          Node addons-444927 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m52s  kubelet          Node addons-444927 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m52s  kubelet          Node addons-444927 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m48s  node-controller  Node addons-444927 event: Registered Node addons-444927 in Controller
	
	
	==> dmesg <==
	[Feb10 09:17]  #2
	[  +0.001427]  #3
	[  +0.000000]  #4
	[  +0.003161] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003164] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002021] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002123]  #5
	[  +0.000751]  #6
	[  +0.000811]  #7
	[  +0.060730] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.448106] i8042: Warning: Keylock active
	[  +0.009792] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004111] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001792] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.002113] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001740] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.629359] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026636] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.129242] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [3689951b3e8e3c7756de3ba03de57b66bad31a4b4dc5540700134f77bc24fe01] <==
	{"level":"info","ts":"2025-02-10T12:33:07.513806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-02-10T12:33:07.513821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-10T12:33:07.514744Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:33:07.515473Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:33:07.515473Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-444927 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T12:33:07.515499Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:33:07.515741Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T12:33:07.515777Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T12:33:07.515973Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:33:07.516048Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:33:07.516076Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:33:07.516352Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:33:07.516725Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:33:07.517444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T12:33:07.517509Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-10T12:33:44.201892Z","caller":"traceutil/trace.go:171","msg":"trace[199568268] transaction","detail":"{read_only:false; response_revision:1025; number_of_response:1; }","duration":"127.850852ms","start":"2025-02-10T12:33:44.074019Z","end":"2025-02-10T12:33:44.201870Z","steps":["trace[199568268] 'process raft request'  (duration: 126.917033ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:33:50.120765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.804057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-02-10T12:33:50.120851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.923048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:33:50.120861Z","caller":"traceutil/trace.go:171","msg":"trace[10515872] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1088; }","duration":"100.953991ms","start":"2025-02-10T12:33:50.019893Z","end":"2025-02-10T12:33:50.120847Z","steps":["trace[10515872] 'range keys from in-memory index tree'  (duration: 100.737113ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:33:50.120880Z","caller":"traceutil/trace.go:171","msg":"trace[1104687145] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1088; }","duration":"100.977882ms","start":"2025-02-10T12:33:50.019893Z","end":"2025-02-10T12:33:50.120871Z","steps":["trace[1104687145] 'range keys from in-memory index tree'  (duration: 100.853237ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:35:23.415346Z","caller":"traceutil/trace.go:171","msg":"trace[748944253] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1540; }","duration":"204.198818ms","start":"2025-02-10T12:35:23.211128Z","end":"2025-02-10T12:35:23.415326Z","steps":["trace[748944253] 'process raft request'  (duration: 173.949813ms)","trace[748944253] 'compare'  (duration: 29.964626ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-10T12:35:23.415401Z","caller":"traceutil/trace.go:171","msg":"trace[1248621682] linearizableReadLoop","detail":"{readStateIndex:1590; appliedIndex:1589; }","duration":"203.708869ms","start":"2025-02-10T12:35:23.211683Z","end":"2025-02-10T12:35:23.415392Z","steps":["trace[1248621682] 'read index received'  (duration: 173.402762ms)","trace[1248621682] 'applied index is now lower than readState.Index'  (duration: 30.305504ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-10T12:35:23.415356Z","caller":"traceutil/trace.go:171","msg":"trace[1667749101] transaction","detail":"{read_only:false; response_revision:1541; number_of_response:1; }","duration":"202.520539ms","start":"2025-02-10T12:35:23.212817Z","end":"2025-02-10T12:35:23.415338Z","steps":["trace[1667749101] 'process raft request'  (duration: 202.440173ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:35:23.415702Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.993785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/batch.volcano.sh/jobs/my-volcano/test-job\" limit:1 ","response":"range_response_count:1 size:1709"}
	{"level":"info","ts":"2025-02-10T12:35:23.415734Z","caller":"traceutil/trace.go:171","msg":"trace[727161755] range","detail":"{range_begin:/registry/batch.volcano.sh/jobs/my-volcano/test-job; range_end:; response_count:1; response_revision:1541; }","duration":"204.061202ms","start":"2025-02-10T12:35:23.211664Z","end":"2025-02-10T12:35:23.415726Z","steps":["trace[727161755] 'agreement among raft nodes before linearized reading'  (duration: 203.933612ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:39:03 up  3:21,  0 users,  load average: 0.19, 0.48, 0.27
	Linux addons-444927 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [bb21e3efe3fc23c6809548162b3e50334b811be602da16e741419cc39d3a6a5f] <==
	I0210 12:36:57.688967       1 main.go:301] handling current node
	I0210 12:37:07.688561       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:37:07.688605       1 main.go:301] handling current node
	I0210 12:37:17.685738       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:37:17.685783       1 main.go:301] handling current node
	I0210 12:37:27.685670       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:37:27.685727       1 main.go:301] handling current node
	I0210 12:37:37.689658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:37:37.689696       1 main.go:301] handling current node
	I0210 12:37:47.687617       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:37:47.687665       1 main.go:301] handling current node
	I0210 12:37:57.685176       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:37:57.685218       1 main.go:301] handling current node
	I0210 12:38:07.688149       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:38:07.688184       1 main.go:301] handling current node
	I0210 12:38:17.692833       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:38:17.692880       1 main.go:301] handling current node
	I0210 12:38:27.685858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:38:27.685897       1 main.go:301] handling current node
	I0210 12:38:37.685639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:38:37.685683       1 main.go:301] handling current node
	I0210 12:38:47.694485       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:38:47.694543       1 main.go:301] handling current node
	I0210 12:38:57.693675       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:38:57.693728       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6b5511caeb4b64a1e5025cdeeac686e0b5c81a0cbd9e5527f0b21e5f070a8cba] <==
	W0210 12:35:24.617006       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0210 12:35:24.703131       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0210 12:35:24.791335       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0210 12:35:25.091032       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0210 12:35:25.390330       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0210 12:35:40.784499       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37964: use of closed network connection
	E0210 12:35:40.939778       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37988: use of closed network connection
	I0210 12:35:50.508707       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.0.255"}
	I0210 12:36:04.788645       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0210 12:36:04.964085       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.248.3"}
	I0210 12:36:07.720554       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0210 12:36:08.835857       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0210 12:36:16.253022       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0210 12:36:35.251817       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0210 12:37:00.602649       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:37:00.602701       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:37:00.616758       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:37:00.616819       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:37:00.629321       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:37:00.629369       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0210 12:37:00.640167       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0210 12:37:00.640206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0210 12:37:01.622485       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0210 12:37:01.640686       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0210 12:37:01.791532       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [285b1d1cd9a34f02e87f67f815e46e3710a9f2a4e94e679386dc52edfd107381] <==
	E0210 12:38:41.295382       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:38:41.962468       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:38:41.963326       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="scheduling.volcano.sh/v1beta1, Resource=podgroups"
	W0210 12:38:41.964156       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:38:41.964182       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:38:42.136263       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:38:42.137120       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobtemplates"
	W0210 12:38:42.137895       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:38:42.137924       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:38:47.850875       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:38:47.851724       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="nodeinfo.volcano.sh/v1alpha1, Resource=numatopologies"
	W0210 12:38:47.852534       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:38:47.852565       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:38:53.980528       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:38:53.981440       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="batch.volcano.sh/v1alpha1, Resource=jobs"
	W0210 12:38:53.982202       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:38:53.982233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:38:55.172891       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:38:55.173747       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0210 12:38:55.174499       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:38:55.174529       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0210 12:38:55.347791       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0210 12:38:55.348672       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0210 12:38:55.349497       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0210 12:38:55.349527       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [4e15a64a6e3a3cbb2b69641a157a914bffeef73fd8f8bda49180cdb370fad050] <==
	I0210 12:33:19.691152       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:33:20.503292       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0210 12:33:20.503369       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:33:20.888874       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0210 12:33:20.888940       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:33:20.892705       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:33:20.893247       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:33:20.893262       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:33:20.895668       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:33:20.895697       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:33:20.895793       1 config.go:199] "Starting service config controller"
	I0210 12:33:20.895800       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:33:20.896238       1 config.go:329] "Starting node config controller"
	I0210 12:33:20.896247       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:33:20.996013       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:33:20.996059       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:33:20.998260       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f0141e94893c804b94acf56c906d0009941f8ca8333aa34efcfd459e91e885f0] <==
	W0210 12:33:09.209806       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0210 12:33:09.210084       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0210 12:33:09.210099       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:33:09.209783       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0210 12:33:09.210102       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0210 12:33:09.210118       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:33:09.209896       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 12:33:09.210141       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:33:09.210008       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 12:33:09.210161       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:33:10.055355       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0210 12:33:10.055412       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0210 12:33:10.058582       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0210 12:33:10.058618       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:33:10.153217       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 12:33:10.153255       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:33:10.199509       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0210 12:33:10.199545       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:33:10.236533       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0210 12:33:10.236581       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 12:33:10.236581       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0210 12:33:10.236598       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 12:33:10.347106       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0210 12:33:10.347146       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:33:12.207728       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 12:37:23 addons-444927 kubelet[1601]: E0210 12:37:23.694275    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
	Feb 10 12:37:26 addons-444927 kubelet[1601]: E0210 12:37:26.574071    1601 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Feb 10 12:37:26 addons-444927 kubelet[1601]: E0210 12:37:26.574146    1601 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Feb 10 12:37:26 addons-444927 kubelet[1601]: E0210 12:37:26.574292    1601 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2nr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 10 12:37:26 addons-444927 kubelet[1601]: E0210 12:37:26.575498    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
	Feb 10 12:37:39 addons-444927 kubelet[1601]: E0210 12:37:39.682878    1601 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Feb 10 12:37:39 addons-444927 kubelet[1601]: E0210 12:37:39.682952    1601 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Feb 10 12:37:39 addons-444927 kubelet[1601]: E0210 12:37:39.683081    1601 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:busybox,Image:busybox:stable,Command:[sh -c echo 'local-path-provisioner' > /test/file1],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:data,ReadOnly:false,MountPath:/test,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvtsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-local-path_default(99e0a41e-dea7-4fc3-a083-fa0680179d33): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 10 12:37:39 addons-444927 kubelet[1601]: E0210 12:37:39.684278    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
	Feb 10 12:37:40 addons-444927 kubelet[1601]: E0210 12:37:40.694782    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
	Feb 10 12:37:50 addons-444927 kubelet[1601]: E0210 12:37:50.694718    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
	Feb 10 12:37:53 addons-444927 kubelet[1601]: E0210 12:37:53.694391    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
	Feb 10 12:38:03 addons-444927 kubelet[1601]: E0210 12:38:03.694635    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
	Feb 10 12:38:05 addons-444927 kubelet[1601]: I0210 12:38:05.693890    1601 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 10 12:38:07 addons-444927 kubelet[1601]: E0210 12:38:07.694145    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
	Feb 10 12:38:15 addons-444927 kubelet[1601]: E0210 12:38:15.694191    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
	Feb 10 12:38:22 addons-444927 kubelet[1601]: E0210 12:38:22.694797    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
	Feb 10 12:38:30 addons-444927 kubelet[1601]: E0210 12:38:30.694156    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
	Feb 10 12:38:37 addons-444927 kubelet[1601]: E0210 12:38:37.694741    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
	Feb 10 12:38:44 addons-444927 kubelet[1601]: E0210 12:38:44.694849    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
	Feb 10 12:38:52 addons-444927 kubelet[1601]: E0210 12:38:52.754711    1601 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Feb 10 12:38:52 addons-444927 kubelet[1601]: E0210 12:38:52.754777    1601 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Feb 10 12:38:52 addons-444927 kubelet[1601]: E0210 12:38:52.754886    1601 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2nr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_defaul
t(7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 10 12:38:52 addons-444927 kubelet[1601]: E0210 12:38:52.756115    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0"
	Feb 10 12:38:58 addons-444927 kubelet[1601]: E0210 12:38:58.694350    1601 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="99e0a41e-dea7-4fc3-a083-fa0680179d33"
	
	
	==> storage-provisioner [7bef7a777b3e3d6550f446e15e90a6819656264468867661440ae2788e0f6aaa] <==
	I0210 12:33:23.586020       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 12:33:23.597839       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 12:33:23.597886       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0210 12:33:23.605184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0210 12:33:23.605348       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-444927_c2111d1a-8855-4840-ba1d-d84eee9e2148!
	I0210 12:33:23.605918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32bfaebe-00fa-401b-b378-8aa3da4fba33", APIVersion:"v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-444927_c2111d1a-8855-4840-ba1d-d84eee9e2148 became leader
	I0210 12:33:23.706462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-444927_c2111d1a-8855-4840-ba1d-d84eee9e2148!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-444927 -n addons-444927
helpers_test.go:261: (dbg) Run:  kubectl --context addons-444927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx test-local-path ingress-nginx-admission-create-zsvgr ingress-nginx-admission-patch-kwk2g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-444927 describe pod nginx test-local-path ingress-nginx-admission-create-zsvgr ingress-nginx-admission-patch-kwk2g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-444927 describe pod nginx test-local-path ingress-nginx-admission-create-zsvgr ingress-nginx-admission-patch-kwk2g: exit status 1 (69.8659ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-444927/192.168.49.2
	Start Time:       Mon, 10 Feb 2025 12:36:04 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j2nr6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j2nr6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/nginx to addons-444927
	  Warning  Failed     98s (x4 over 2m58s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    27s (x9 over 2m58s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     27s (x9 over 2m58s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    13s (x5 over 2m59s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12s (x5 over 2m58s)  kubelet            Error: ErrImagePull
	  Warning  Failed     12s                  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-444927/192.168.49.2
	Start Time:       Mon, 10 Feb 2025 12:36:01 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.32
	IPs:
	  IP:  10.244.0.32
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvtsj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-qvtsj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/test-local-path to addons-444927
	  Warning  Failed     3m1s                 kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:afa67e3cea50ce204060a6c0113bd63cb289cc0f555d5a80a3bb7c0f62b95add: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    86s (x4 over 3m2s)   kubelet            Pulling image "busybox:stable"
	  Warning  Failed     85s (x4 over 3m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     85s (x3 over 2m46s)  kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:71b79694b71639e633452f57fd9de40595d524de308349218d9a6a144b40be02: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    6s (x11 over 3m1s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     6s (x11 over 3m1s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-zsvgr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kwk2g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-444927 describe pod nginx test-local-path ingress-nginx-admission-create-zsvgr ingress-nginx-admission-patch-kwk2g: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/LocalPath (188.03s)
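
All of the pull failures above are unauthenticated Docker Hub requests hitting the 429 toomanyrequests rate limit; nothing is wrong with busybox:stable or docker.io/nginx:alpine themselves. A minimal mitigation sketch, assuming a host that can still pull (after docker login if needed) and assuming the addons-444927 profile from this run were still up, is to pre-load the images into the cluster so kubelet never has to contact registry-1.docker.io:

	docker pull busybox:stable
	docker pull nginx:alpine
	minikube -p addons-444927 image load busybox:stable
	minikube -p addons-444927 image load nginx:alpine

Alternatively, in-cluster pulls can be authenticated by attaching a docker-registry secret to the default service account (the secret name dockerhub-creds and the credential variables are illustrative, not taken from this run):

	kubectl --context addons-444927 create secret docker-registry dockerhub-creds --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	kubectl --context addons-444927 patch serviceaccount default -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'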

                                                
                                    
TestDockerEnvContainerd (36.99s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-014550 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-014550 --driver=docker  --container-runtime=containerd: (20.184757824s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-014550"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-3MSm2oxK9FVF/agent.105073" SSH_AGENT_PID="105074" DOCKER_HOST=ssh://docker@127.0.0.1:32778 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-3MSm2oxK9FVF/agent.105073" SSH_AGENT_PID="105074" DOCKER_HOST=ssh://docker@127.0.0.1:32778 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-3MSm2oxK9FVF/agent.105073" SSH_AGENT_PID="105074" DOCKER_HOST=ssh://docker@127.0.0.1:32778 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (1.144922491s)

                                                
                                                
-- stdout --
	Sending build context to Docker daemon  2.048kB

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

                                                
                                                
** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

                                                
                                                
-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

                                                
                                                
** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-3MSm2oxK9FVF/agent.105073" SSH_AGENT_PID="105074" DOCKER_HOST=ssh://docker@127.0.0.1:32778 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
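
The build reaches the daemon (the 2.048kB context is sent) and then fails inside the legacy builder with a bare "Error response from daemon: exit status 1"; as the deprecation notice states, BuildKit was deliberately disabled via DOCKER_BUILDKIT=0. A rough way to reproduce this outside the test harness, assuming the dockerenv-014550 profile were still running, is to load the same SSH-based docker-env and retry the build with and without the legacy builder:

	eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-014550)"
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env   # legacy builder, the failing path
	DOCKER_BUILDKIT=1 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env   # BuildKit, for comparison
	docker image ls | grep minikube-dockerenv-containerd-test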
panic.go:629: *** TestDockerEnvContainerd FAILED at 2025-02-10 12:42:51.635592766 +0000 UTC m=+630.792759910
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect dockerenv-014550
helpers_test.go:235: (dbg) docker inspect dockerenv-014550:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4175ba7c793378f063b29113973a1e7ff9df17e1582d652d826bfe806624ff34",
	        "Created": "2025-02-10T12:42:23.785809972Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 102326,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-10T12:42:23.893166367Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/4175ba7c793378f063b29113973a1e7ff9df17e1582d652d826bfe806624ff34/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4175ba7c793378f063b29113973a1e7ff9df17e1582d652d826bfe806624ff34/hostname",
	        "HostsPath": "/var/lib/docker/containers/4175ba7c793378f063b29113973a1e7ff9df17e1582d652d826bfe806624ff34/hosts",
	        "LogPath": "/var/lib/docker/containers/4175ba7c793378f063b29113973a1e7ff9df17e1582d652d826bfe806624ff34/4175ba7c793378f063b29113973a1e7ff9df17e1582d652d826bfe806624ff34-json.log",
	        "Name": "/dockerenv-014550",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-014550:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "dockerenv-014550",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c1ea24105ba1ff14d45101029c456ccbc3ef79f0c99fba67669ea6c4d6cf444-init/diff:/var/lib/docker/overlay2/9ffca27f7ebed742e3d0dd8f2061c1044c6b8fc8f60ace2c8ab1f353604acf23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c1ea24105ba1ff14d45101029c456ccbc3ef79f0c99fba67669ea6c4d6cf444/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c1ea24105ba1ff14d45101029c456ccbc3ef79f0c99fba67669ea6c4d6cf444/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c1ea24105ba1ff14d45101029c456ccbc3ef79f0c99fba67669ea6c4d6cf444/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-014550",
	                "Source": "/var/lib/docker/volumes/dockerenv-014550/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-014550",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-014550",
	                "name.minikube.sigs.k8s.io": "dockerenv-014550",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3f7f6251b7d48c8909b07c3c9685ddd4c3d519eaca96da196c253ca594511a1f",
	            "SandboxKey": "/var/run/docker/netns/3f7f6251b7d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-014550": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "db6d77bdeddb9ada2e946f65fd393284d196f581772274182c5c11bba446df01",
	                    "EndpointID": "d58fecab06a84b2e8b43ddb48be3f4e31a8aebc2daea2f0b6fd0d8da3556938c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-014550",
	                        "4175ba7c7933"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
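The inspect output above shows the node's SSH port 22 published on 127.0.0.1:32778, the same endpoint used as DOCKER_HOST for the failed build, so the daemon was reachable and the failure is confined to the build step. Assuming the container is still up and the docker-env --ssh-host --ssh-add environment is still loaded, a quick way to confirm the mapping and connectivity would be:

	docker port dockerenv-014550 22
	docker -H ssh://docker@127.0.0.1:32778 version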
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p dockerenv-014550 -n dockerenv-014550
helpers_test.go:244: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p dockerenv-014550 logs -n 25
helpers_test.go:252: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------|------------------|---------|---------|---------------------|---------------------|
	|  Command   |              Args              |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|------------------|---------|---------|---------------------|---------------------|
	| addons     | addons-444927 addons disable   | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|            | gcp-auth --alsologtostderr     |                  |         |         |                     |                     |
	|            | -v=1                           |                  |         |         |                     |                     |
	| addons     | enable headlamp                | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|            | -p addons-444927               |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| addons     | addons-444927 addons           | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|            | disable nvidia-device-plugin   |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| addons     | addons-444927 addons           | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:35 UTC |
	|            | disable metrics-server         |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| addons     | addons-444927 addons disable   | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:35 UTC | 10 Feb 25 12:36 UTC |
	|            | headlamp --alsologtostderr     |                  |         |         |                     |                     |
	|            | -v=1                           |                  |         |         |                     |                     |
	| addons     | addons-444927 addons           | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|            | disable cloud-spanner          |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| ip         | addons-444927 ip               | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	| addons     | addons-444927 addons disable   | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|            | registry --alsologtostderr     |                  |         |         |                     |                     |
	|            | -v=1                           |                  |         |         |                     |                     |
	| addons     | addons-444927 addons           | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|            | disable inspektor-gadget       |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| addons     | addons-444927 addons disable   | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|            | amd-gpu-device-plugin          |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| addons     | addons-444927 addons disable   | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:36 UTC | 10 Feb 25 12:36 UTC |
	|            | yakd --alsologtostderr -v=1    |                  |         |         |                     |                     |
	| addons     | addons-444927 addons           | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:37 UTC | 10 Feb 25 12:37 UTC |
	|            | disable volumesnapshots        |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| addons     | addons-444927 addons           | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:37 UTC | 10 Feb 25 12:37 UTC |
	|            | disable csi-hostpath-driver    |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| addons     | addons-444927 addons disable   | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:39 UTC | 10 Feb 25 12:39 UTC |
	|            | storage-provisioner-rancher    |                  |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                  |         |         |                     |                     |
	| ssh        | addons-444927 ssh curl -s      | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:41 UTC | 10 Feb 25 12:41 UTC |
	|            | http://127.0.0.1/ -H 'Host:    |                  |         |         |                     |                     |
	|            | nginx.example.com'             |                  |         |         |                     |                     |
	| ip         | addons-444927 ip               | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:41 UTC | 10 Feb 25 12:41 UTC |
	| addons     | addons-444927 addons disable   | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:41 UTC | 10 Feb 25 12:41 UTC |
	|            | ingress-dns --alsologtostderr  |                  |         |         |                     |                     |
	|            | -v=1                           |                  |         |         |                     |                     |
	| addons     | addons-444927 addons disable   | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:41 UTC | 10 Feb 25 12:42 UTC |
	|            | ingress --alsologtostderr -v=1 |                  |         |         |                     |                     |
	| stop       | -p addons-444927               | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:42 UTC | 10 Feb 25 12:42 UTC |
	| addons     | enable dashboard -p            | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:42 UTC | 10 Feb 25 12:42 UTC |
	|            | addons-444927                  |                  |         |         |                     |                     |
	| addons     | disable dashboard -p           | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:42 UTC | 10 Feb 25 12:42 UTC |
	|            | addons-444927                  |                  |         |         |                     |                     |
	| addons     | disable gvisor -p              | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:42 UTC | 10 Feb 25 12:42 UTC |
	|            | addons-444927                  |                  |         |         |                     |                     |
	| delete     | -p addons-444927               | addons-444927    | jenkins | v1.35.0 | 10 Feb 25 12:42 UTC | 10 Feb 25 12:42 UTC |
	| start      | -p dockerenv-014550            | dockerenv-014550 | jenkins | v1.35.0 | 10 Feb 25 12:42 UTC | 10 Feb 25 12:42 UTC |
	|            | --driver=docker                |                  |         |         |                     |                     |
	|            | --container-runtime=containerd |                  |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p        | dockerenv-014550 | jenkins | v1.35.0 | 10 Feb 25 12:42 UTC | 10 Feb 25 12:42 UTC |
	|            | dockerenv-014550               |                  |         |         |                     |                     |
	|------------|--------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:42:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:42:18.308369  101660 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:42:18.308460  101660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:42:18.308463  101660 out.go:358] Setting ErrFile to fd 2...
	I0210 12:42:18.308522  101660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:42:18.308712  101660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:42:18.309299  101660 out.go:352] Setting JSON to false
	I0210 12:42:18.310158  101660 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12287,"bootTime":1739179051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:42:18.310259  101660 start.go:139] virtualization: kvm guest
	I0210 12:42:18.312911  101660 out.go:177] * [dockerenv-014550] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:42:18.314392  101660 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:42:18.314428  101660 notify.go:220] Checking for updates...
	I0210 12:42:18.317217  101660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:42:18.318771  101660 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:42:18.320354  101660 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 12:42:18.321619  101660 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:42:18.323128  101660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:42:18.324824  101660 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:42:18.347816  101660 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 12:42:18.347904  101660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:42:18.396102  101660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2025-02-10 12:42:18.387058887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:42:18.396202  101660 docker.go:318] overlay module found
	I0210 12:42:18.398379  101660 out.go:177] * Using the docker driver based on user configuration
	I0210 12:42:18.400051  101660 start.go:297] selected driver: docker
	I0210 12:42:18.400061  101660 start.go:901] validating driver "docker" against <nil>
	I0210 12:42:18.400072  101660 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:42:18.400181  101660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:42:18.448710  101660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2025-02-10 12:42:18.439873731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:42:18.448876  101660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:42:18.449405  101660 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0210 12:42:18.449537  101660 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 12:42:18.451405  101660 out.go:177] * Using Docker driver with root privileges
	I0210 12:42:18.452815  101660 cni.go:84] Creating CNI manager for ""
	I0210 12:42:18.452876  101660 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 12:42:18.452884  101660 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 12:42:18.452953  101660 start.go:340] cluster config:
	{Name:dockerenv-014550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:dockerenv-014550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:42:18.454579  101660 out.go:177] * Starting "dockerenv-014550" primary control-plane node in "dockerenv-014550" cluster
	I0210 12:42:18.455929  101660 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0210 12:42:18.457244  101660 out.go:177] * Pulling base image v0.0.46 ...
	I0210 12:42:18.458484  101660 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 12:42:18.458517  101660 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
	I0210 12:42:18.458530  101660 cache.go:56] Caching tarball of preloaded images
	I0210 12:42:18.458586  101660 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0210 12:42:18.458631  101660 preload.go:172] Found /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 12:42:18.458638  101660 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
	I0210 12:42:18.459001  101660 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/config.json ...
	I0210 12:42:18.459019  101660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/config.json: {Name:mk2cb8f8c9fc073ea07cb3c0274f6b72765f1949 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:18.478561  101660 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
	I0210 12:42:18.478587  101660 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
	I0210 12:42:18.478615  101660 cache.go:230] Successfully downloaded all kic artifacts
	I0210 12:42:18.478641  101660 start.go:360] acquireMachinesLock for dockerenv-014550: {Name:mk339c395b42fc3d1c970bf1d5bfaa94c81c0889 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:42:18.478735  101660 start.go:364] duration metric: took 80.023µs to acquireMachinesLock for "dockerenv-014550"
	I0210 12:42:18.478758  101660 start.go:93] Provisioning new machine with config: &{Name:dockerenv-014550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:dockerenv-014550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0210 12:42:18.478839  101660 start.go:125] createHost starting for "" (driver="docker")
	I0210 12:42:18.481049  101660 out.go:235] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I0210 12:42:18.481241  101660 start.go:159] libmachine.API.Create for "dockerenv-014550" (driver="docker")
	I0210 12:42:18.481272  101660 client.go:168] LocalClient.Create starting
	I0210 12:42:18.481330  101660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem
	I0210 12:42:18.481361  101660 main.go:141] libmachine: Decoding PEM data...
	I0210 12:42:18.481375  101660 main.go:141] libmachine: Parsing certificate...
	I0210 12:42:18.481421  101660 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem
	I0210 12:42:18.481434  101660 main.go:141] libmachine: Decoding PEM data...
	I0210 12:42:18.481444  101660 main.go:141] libmachine: Parsing certificate...
	I0210 12:42:18.481722  101660 cli_runner.go:164] Run: docker network inspect dockerenv-014550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0210 12:42:18.498318  101660 cli_runner.go:211] docker network inspect dockerenv-014550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0210 12:42:18.498376  101660 network_create.go:284] running [docker network inspect dockerenv-014550] to gather additional debugging logs...
	I0210 12:42:18.498390  101660 cli_runner.go:164] Run: docker network inspect dockerenv-014550
	W0210 12:42:18.514163  101660 cli_runner.go:211] docker network inspect dockerenv-014550 returned with exit code 1
	I0210 12:42:18.514183  101660 network_create.go:287] error running [docker network inspect dockerenv-014550]: docker network inspect dockerenv-014550: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-014550 not found
	I0210 12:42:18.514203  101660 network_create.go:289] output of [docker network inspect dockerenv-014550]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-014550 not found
	
	** /stderr **
	I0210 12:42:18.514374  101660 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0210 12:42:18.531801  101660 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b6e9b0}
	I0210 12:42:18.531833  101660 network_create.go:124] attempt to create docker network dockerenv-014550 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0210 12:42:18.531871  101660 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-014550 dockerenv-014550
	I0210 12:42:18.593501  101660 network_create.go:108] docker network dockerenv-014550 192.168.49.0/24 created
	I0210 12:42:18.593519  101660 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-014550" container
	I0210 12:42:18.593596  101660 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0210 12:42:18.610017  101660 cli_runner.go:164] Run: docker volume create dockerenv-014550 --label name.minikube.sigs.k8s.io=dockerenv-014550 --label created_by.minikube.sigs.k8s.io=true
	I0210 12:42:18.628924  101660 oci.go:103] Successfully created a docker volume dockerenv-014550
	I0210 12:42:18.629001  101660 cli_runner.go:164] Run: docker run --rm --name dockerenv-014550-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-014550 --entrypoint /usr/bin/test -v dockerenv-014550:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
	I0210 12:42:19.175755  101660 oci.go:107] Successfully prepared a docker volume dockerenv-014550
	I0210 12:42:19.175796  101660 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 12:42:19.175819  101660 kic.go:194] Starting extracting preloaded images to volume ...
	I0210 12:42:19.175887  101660 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-014550:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
	I0210 12:42:23.724222  101660 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v dockerenv-014550:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (4.548276903s)
	I0210 12:42:23.724247  101660 kic.go:203] duration metric: took 4.548423849s to extract preloaded images to volume ...
	W0210 12:42:23.724577  101660 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0210 12:42:23.724660  101660 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0210 12:42:23.770098  101660 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-014550 --name dockerenv-014550 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-014550 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-014550 --network dockerenv-014550 --ip 192.168.49.2 --volume dockerenv-014550:/var --security-opt apparmor=unconfined --memory=8000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
	I0210 12:42:24.068924  101660 cli_runner.go:164] Run: docker container inspect dockerenv-014550 --format={{.State.Running}}
	I0210 12:42:24.089463  101660 cli_runner.go:164] Run: docker container inspect dockerenv-014550 --format={{.State.Status}}
	I0210 12:42:24.109510  101660 cli_runner.go:164] Run: docker exec dockerenv-014550 stat /var/lib/dpkg/alternatives/iptables
	I0210 12:42:24.150810  101660 oci.go:144] the created container "dockerenv-014550" has a running status.
	I0210 12:42:24.150840  101660 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa...
	I0210 12:42:24.331159  101660 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0210 12:42:24.354622  101660 cli_runner.go:164] Run: docker container inspect dockerenv-014550 --format={{.State.Status}}
	I0210 12:42:24.374873  101660 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0210 12:42:24.374889  101660 kic_runner.go:114] Args: [docker exec --privileged dockerenv-014550 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0210 12:42:24.493932  101660 cli_runner.go:164] Run: docker container inspect dockerenv-014550 --format={{.State.Status}}
	I0210 12:42:24.514981  101660 machine.go:93] provisionDockerMachine start ...
	I0210 12:42:24.515105  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:24.540844  101660 main.go:141] libmachine: Using SSH client type: native
	I0210 12:42:24.541043  101660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0210 12:42:24.541049  101660 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 12:42:24.751844  101660 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-014550
	
	I0210 12:42:24.751878  101660 ubuntu.go:169] provisioning hostname "dockerenv-014550"
	I0210 12:42:24.751929  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:24.771172  101660 main.go:141] libmachine: Using SSH client type: native
	I0210 12:42:24.771345  101660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0210 12:42:24.771353  101660 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-014550 && echo "dockerenv-014550" | sudo tee /etc/hostname
	I0210 12:42:24.911961  101660 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-014550
	
	I0210 12:42:24.912045  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:24.929444  101660 main.go:141] libmachine: Using SSH client type: native
	I0210 12:42:24.929618  101660 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865a00] 0x8686e0 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I0210 12:42:24.929630  101660 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-014550' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-014550/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-014550' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:42:25.060825  101660 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 12:42:25.060850  101660 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20390-71607/.minikube CaCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20390-71607/.minikube}
	I0210 12:42:25.060882  101660 ubuntu.go:177] setting up certificates
	I0210 12:42:25.060894  101660 provision.go:84] configureAuth start
	I0210 12:42:25.060944  101660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-014550
	I0210 12:42:25.078132  101660 provision.go:143] copyHostCerts
	I0210 12:42:25.078184  101660 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-71607/.minikube/ca.pem, removing ...
	I0210 12:42:25.078191  101660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-71607/.minikube/ca.pem
	I0210 12:42:25.078255  101660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/ca.pem (1082 bytes)
	I0210 12:42:25.078334  101660 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-71607/.minikube/cert.pem, removing ...
	I0210 12:42:25.078338  101660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-71607/.minikube/cert.pem
	I0210 12:42:25.078358  101660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/cert.pem (1123 bytes)
	I0210 12:42:25.078405  101660 exec_runner.go:144] found /home/jenkins/minikube-integration/20390-71607/.minikube/key.pem, removing ...
	I0210 12:42:25.078408  101660 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20390-71607/.minikube/key.pem
	I0210 12:42:25.078426  101660 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20390-71607/.minikube/key.pem (1675 bytes)
	I0210 12:42:25.078465  101660 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem org=jenkins.dockerenv-014550 san=[127.0.0.1 192.168.49.2 dockerenv-014550 localhost minikube]
	I0210 12:42:25.217915  101660 provision.go:177] copyRemoteCerts
	I0210 12:42:25.217964  101660 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:42:25.218001  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:25.235315  101660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa Username:docker}
	I0210 12:42:25.329290  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 12:42:25.351704  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0210 12:42:25.373543  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 12:42:25.395111  101660 provision.go:87] duration metric: took 334.202244ms to configureAuth
	I0210 12:42:25.395135  101660 ubuntu.go:193] setting minikube options for container-runtime
	I0210 12:42:25.395308  101660 config.go:182] Loaded profile config "dockerenv-014550": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:42:25.395315  101660 machine.go:96] duration metric: took 880.315566ms to provisionDockerMachine
	I0210 12:42:25.395321  101660 client.go:171] duration metric: took 6.914044898s to LocalClient.Create
	I0210 12:42:25.395343  101660 start.go:167] duration metric: took 6.91410248s to libmachine.API.Create "dockerenv-014550"
	I0210 12:42:25.395349  101660 start.go:293] postStartSetup for "dockerenv-014550" (driver="docker")
	I0210 12:42:25.395357  101660 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:42:25.395398  101660 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:42:25.395427  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:25.413037  101660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa Username:docker}
	I0210 12:42:25.505311  101660 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:42:25.508419  101660 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0210 12:42:25.508448  101660 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0210 12:42:25.508455  101660 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0210 12:42:25.508461  101660 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0210 12:42:25.508486  101660 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-71607/.minikube/addons for local assets ...
	I0210 12:42:25.508540  101660 filesync.go:126] Scanning /home/jenkins/minikube-integration/20390-71607/.minikube/files for local assets ...
	I0210 12:42:25.508557  101660 start.go:296] duration metric: took 113.203831ms for postStartSetup
	I0210 12:42:25.508832  101660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-014550
	I0210 12:42:25.527487  101660 profile.go:143] Saving config to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/config.json ...
	I0210 12:42:25.527752  101660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:42:25.527786  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:25.544583  101660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa Username:docker}
	I0210 12:42:25.633521  101660 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0210 12:42:25.638009  101660 start.go:128] duration metric: took 7.159153589s to createHost
	I0210 12:42:25.638028  101660 start.go:83] releasing machines lock for "dockerenv-014550", held for 7.159285477s
	I0210 12:42:25.638098  101660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-014550
	I0210 12:42:25.655221  101660 ssh_runner.go:195] Run: cat /version.json
	I0210 12:42:25.655263  101660 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0210 12:42:25.655275  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:25.655336  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:25.674145  101660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa Username:docker}
	I0210 12:42:25.674833  101660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa Username:docker}
	I0210 12:42:25.835307  101660 ssh_runner.go:195] Run: systemctl --version
	I0210 12:42:25.839429  101660 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 12:42:25.843518  101660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0210 12:42:25.866133  101660 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0210 12:42:25.866210  101660 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:42:25.891221  101660 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0210 12:42:25.891237  101660 start.go:495] detecting cgroup driver to use...
	I0210 12:42:25.891266  101660 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0210 12:42:25.891313  101660 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 12:42:25.902277  101660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:42:25.912431  101660 docker.go:217] disabling cri-docker service (if available) ...
	I0210 12:42:25.912495  101660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0210 12:42:25.924492  101660 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0210 12:42:25.937316  101660 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0210 12:42:26.013160  101660 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0210 12:42:26.092232  101660 docker.go:233] disabling docker service ...
	I0210 12:42:26.092284  101660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0210 12:42:26.111915  101660 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0210 12:42:26.122631  101660 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0210 12:42:26.195597  101660 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0210 12:42:26.272283  101660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0210 12:42:26.282787  101660 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:42:26.298147  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0210 12:42:26.307439  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 12:42:26.316756  101660 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 12:42:26.316815  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 12:42:26.326316  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:42:26.335716  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 12:42:26.345059  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:42:26.354198  101660 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:42:26.362956  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 12:42:26.371605  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 12:42:26.380240  101660 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 12:42:26.389325  101660 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:42:26.396635  101660 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 12:42:26.404034  101660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:42:26.472049  101660 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 12:42:26.583185  101660 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0210 12:42:26.583232  101660 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0210 12:42:26.586667  101660 start.go:563] Will wait 60s for crictl version
	I0210 12:42:26.586706  101660 ssh_runner.go:195] Run: which crictl
	I0210 12:42:26.589806  101660 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:42:26.621163  101660 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.24
	RuntimeApiVersion:  v1
	I0210 12:42:26.621211  101660 ssh_runner.go:195] Run: containerd --version
	I0210 12:42:26.643112  101660 ssh_runner.go:195] Run: containerd --version
	I0210 12:42:26.667578  101660 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
	I0210 12:42:26.668759  101660 cli_runner.go:164] Run: docker network inspect dockerenv-014550 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0210 12:42:26.686154  101660 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0210 12:42:26.689756  101660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:42:26.699974  101660 kubeadm.go:883] updating cluster {Name:dockerenv-014550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:dockerenv-014550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 12:42:26.700119  101660 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
	I0210 12:42:26.700175  101660 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 12:42:26.731133  101660 containerd.go:627] all images are preloaded for containerd runtime.
	I0210 12:42:26.731148  101660 containerd.go:534] Images already preloaded, skipping extraction
	I0210 12:42:26.731215  101660 ssh_runner.go:195] Run: sudo crictl images --output json
	I0210 12:42:26.763401  101660 containerd.go:627] all images are preloaded for containerd runtime.
	I0210 12:42:26.763414  101660 cache_images.go:84] Images are preloaded, skipping loading
	I0210 12:42:26.763420  101660 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.1 containerd true true} ...
	I0210 12:42:26.763502  101660 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-014550 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:dockerenv-014550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 12:42:26.763546  101660 ssh_runner.go:195] Run: sudo crictl info
	I0210 12:42:26.795559  101660 cni.go:84] Creating CNI manager for ""
	I0210 12:42:26.795570  101660 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 12:42:26.795579  101660 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 12:42:26.795598  101660 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-014550 NodeName:dockerenv-014550 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 12:42:26.795702  101660 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-014550"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 12:42:26.795751  101660 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:42:26.803844  101660 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 12:42:26.803895  101660 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 12:42:26.811726  101660 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0210 12:42:26.827703  101660 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:42:26.844148  101660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2306 bytes)
	I0210 12:42:26.859981  101660 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0210 12:42:26.863099  101660 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:42:26.873046  101660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:42:26.944448  101660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:42:26.956517  101660 certs.go:68] Setting up /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550 for IP: 192.168.49.2
	I0210 12:42:26.956539  101660 certs.go:194] generating shared ca certs ...
	I0210 12:42:26.956562  101660 certs.go:226] acquiring lock for ca certs: {Name:mked3bdcf754b16a474f1226f12a3cc337a7b998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:26.956736  101660 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key
	I0210 12:42:26.956784  101660 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key
	I0210 12:42:26.956792  101660 certs.go:256] generating profile certs ...
	I0210 12:42:26.956853  101660 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/client.key
	I0210 12:42:26.956866  101660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/client.crt with IP's: []
	I0210 12:42:27.203918  101660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/client.crt ...
	I0210 12:42:27.203934  101660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/client.crt: {Name:mk5d030853fef7141c4e000b4e73cfedc1c6a7f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:27.204123  101660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/client.key ...
	I0210 12:42:27.204129  101660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/client.key: {Name:mkf2672f8e41768ef8eae0fe352d1668fa3025e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:27.204209  101660 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.key.efac82d9
	I0210 12:42:27.204219  101660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.crt.efac82d9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0210 12:42:27.395249  101660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.crt.efac82d9 ...
	I0210 12:42:27.395271  101660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.crt.efac82d9: {Name:mk8af833ee468279def7d7a65086207cbf68194f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:27.395444  101660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.key.efac82d9 ...
	I0210 12:42:27.395463  101660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.key.efac82d9: {Name:mkdb41e15c8e7674aa2719d1df722d1185b74e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:27.395541  101660 certs.go:381] copying /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.crt.efac82d9 -> /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.crt
	I0210 12:42:27.395619  101660 certs.go:385] copying /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.key.efac82d9 -> /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.key
	I0210 12:42:27.395668  101660 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/proxy-client.key
	I0210 12:42:27.395678  101660 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/proxy-client.crt with IP's: []
	I0210 12:42:27.530272  101660 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/proxy-client.crt ...
	I0210 12:42:27.530289  101660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/proxy-client.crt: {Name:mkd4cc665022cb985f4700d652a57145cf047dfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:27.530454  101660 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/proxy-client.key ...
	I0210 12:42:27.530462  101660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/proxy-client.key: {Name:mkd8a7951ab057476c05e18ac3e1cdba1e1e5f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:27.530631  101660 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca-key.pem (1679 bytes)
	I0210 12:42:27.530662  101660 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/ca.pem (1082 bytes)
	I0210 12:42:27.530700  101660 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/cert.pem (1123 bytes)
	I0210 12:42:27.530721  101660 certs.go:484] found cert: /home/jenkins/minikube-integration/20390-71607/.minikube/certs/key.pem (1675 bytes)
	I0210 12:42:27.531323  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:42:27.553285  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:42:27.574690  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:42:27.596197  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 12:42:27.617354  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 12:42:27.639764  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 12:42:27.661891  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 12:42:27.683201  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/dockerenv-014550/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 12:42:27.704597  101660 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:42:27.726454  101660 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 12:42:27.743034  101660 ssh_runner.go:195] Run: openssl version
	I0210 12:42:27.748055  101660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:42:27.757481  101660 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:42:27.761142  101660 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 12:33 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:42:27.761189  101660 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:42:27.767952  101660 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 12:42:27.777647  101660 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:42:27.780878  101660 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:42:27.780919  101660 kubeadm.go:392] StartCluster: {Name:dockerenv-014550 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:dockerenv-014550 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:42:27.780993  101660 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0210 12:42:27.781045  101660 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0210 12:42:27.813494  101660 cri.go:89] found id: ""
	I0210 12:42:27.813549  101660 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 12:42:27.822421  101660 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 12:42:27.830578  101660 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0210 12:42:27.830620  101660 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 12:42:27.838629  101660 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 12:42:27.838639  101660 kubeadm.go:157] found existing configuration files:
	
	I0210 12:42:27.838686  101660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 12:42:27.846608  101660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 12:42:27.846652  101660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 12:42:27.854369  101660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 12:42:27.862313  101660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 12:42:27.862367  101660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 12:42:27.869780  101660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 12:42:27.877477  101660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 12:42:27.877516  101660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 12:42:27.885224  101660 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 12:42:27.892923  101660 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 12:42:27.892962  101660 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 12:42:27.900364  101660 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0210 12:42:27.952104  101660 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0210 12:42:27.952276  101660 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
	I0210 12:42:28.003929  101660 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 12:42:36.924688  101660 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 12:42:36.924749  101660 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 12:42:36.924834  101660 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0210 12:42:36.924876  101660 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
	I0210 12:42:36.924901  101660 kubeadm.go:310] OS: Linux
	I0210 12:42:36.924934  101660 kubeadm.go:310] CGROUPS_CPU: enabled
	I0210 12:42:36.924980  101660 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0210 12:42:36.925014  101660 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0210 12:42:36.925049  101660 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0210 12:42:36.925084  101660 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0210 12:42:36.925139  101660 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0210 12:42:36.925185  101660 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0210 12:42:36.925221  101660 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0210 12:42:36.925254  101660 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0210 12:42:36.925315  101660 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 12:42:36.925396  101660 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 12:42:36.925467  101660 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 12:42:36.925514  101660 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 12:42:36.927011  101660 out.go:235]   - Generating certificates and keys ...
	I0210 12:42:36.927080  101660 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 12:42:36.927129  101660 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 12:42:36.927178  101660 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 12:42:36.927223  101660 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 12:42:36.927268  101660 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 12:42:36.927305  101660 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 12:42:36.927343  101660 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 12:42:36.927433  101660 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [dockerenv-014550 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0210 12:42:36.927474  101660 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 12:42:36.927615  101660 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-014550 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0210 12:42:36.927716  101660 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 12:42:36.927794  101660 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 12:42:36.927841  101660 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 12:42:36.927904  101660 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 12:42:36.927946  101660 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 12:42:36.928006  101660 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 12:42:36.928047  101660 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 12:42:36.928093  101660 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 12:42:36.928133  101660 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 12:42:36.928197  101660 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 12:42:36.928254  101660 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 12:42:36.929903  101660 out.go:235]   - Booting up control plane ...
	I0210 12:42:36.929990  101660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 12:42:36.930079  101660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 12:42:36.930154  101660 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 12:42:36.930286  101660 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:42:36.930408  101660 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:42:36.930446  101660 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 12:42:36.930554  101660 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 12:42:36.930646  101660 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 12:42:36.930709  101660 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001806838s
	I0210 12:42:36.930802  101660 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 12:42:36.930853  101660 kubeadm.go:310] [api-check] The API server is healthy after 4.001190098s
	I0210 12:42:36.930946  101660 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 12:42:36.931054  101660 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 12:42:36.931097  101660 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 12:42:36.931233  101660 kubeadm.go:310] [mark-control-plane] Marking the node dockerenv-014550 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 12:42:36.931279  101660 kubeadm.go:310] [bootstrap-token] Using token: af3jg4.err9buy0pi6thsps
	I0210 12:42:36.932685  101660 out.go:235]   - Configuring RBAC rules ...
	I0210 12:42:36.932767  101660 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 12:42:36.932836  101660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 12:42:36.932960  101660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 12:42:36.933112  101660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 12:42:36.933266  101660 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 12:42:36.933377  101660 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 12:42:36.933534  101660 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 12:42:36.933591  101660 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 12:42:36.933651  101660 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 12:42:36.933659  101660 kubeadm.go:310] 
	I0210 12:42:36.933766  101660 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 12:42:36.933773  101660 kubeadm.go:310] 
	I0210 12:42:36.933830  101660 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 12:42:36.933832  101660 kubeadm.go:310] 
	I0210 12:42:36.933851  101660 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 12:42:36.933894  101660 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 12:42:36.933935  101660 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 12:42:36.933937  101660 kubeadm.go:310] 
	I0210 12:42:36.933979  101660 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 12:42:36.933981  101660 kubeadm.go:310] 
	I0210 12:42:36.934016  101660 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 12:42:36.934018  101660 kubeadm.go:310] 
	I0210 12:42:36.934084  101660 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 12:42:36.934158  101660 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 12:42:36.934208  101660 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 12:42:36.934211  101660 kubeadm.go:310] 
	I0210 12:42:36.934274  101660 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 12:42:36.934338  101660 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 12:42:36.934341  101660 kubeadm.go:310] 
	I0210 12:42:36.934403  101660 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token af3jg4.err9buy0pi6thsps \
	I0210 12:42:36.934480  101660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a04e7adba77e55f6c403d6b6702c62e468700cf463ec68bf30f3cb8b7b5deb33 \
	I0210 12:42:36.934495  101660 kubeadm.go:310] 	--control-plane 
	I0210 12:42:36.934497  101660 kubeadm.go:310] 
	I0210 12:42:36.934579  101660 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 12:42:36.934582  101660 kubeadm.go:310] 
	I0210 12:42:36.934651  101660 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token af3jg4.err9buy0pi6thsps \
	I0210 12:42:36.934747  101660 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a04e7adba77e55f6c403d6b6702c62e468700cf463ec68bf30f3cb8b7b5deb33 
	I0210 12:42:36.934754  101660 cni.go:84] Creating CNI manager for ""
	I0210 12:42:36.934760  101660 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 12:42:36.936204  101660 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0210 12:42:36.937513  101660 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0210 12:42:36.941272  101660 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0210 12:42:36.941281  101660 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0210 12:42:36.958840  101660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0210 12:42:37.156068  101660 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 12:42:37.156145  101660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 12:42:37.156181  101660 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-014550 minikube.k8s.io/updated_at=2025_02_10T12_42_37_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04 minikube.k8s.io/name=dockerenv-014550 minikube.k8s.io/primary=true
	I0210 12:42:37.294154  101660 ops.go:34] apiserver oom_adj: -16
	I0210 12:42:37.294213  101660 kubeadm.go:1113] duration metric: took 138.135131ms to wait for elevateKubeSystemPrivileges
	I0210 12:42:37.294242  101660 kubeadm.go:394] duration metric: took 9.513326672s to StartCluster
	I0210 12:42:37.294259  101660 settings.go:142] acquiring lock: {Name:mk48700407fa7ae208a78ae38cd1ed6c94166a30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:37.294324  101660 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:42:37.294959  101660 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20390-71607/kubeconfig: {Name:mk5db87da690cfc2ed8765dd4558179e05f09057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:42:37.295183  101660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 12:42:37.295180  101660 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0210 12:42:37.295262  101660 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 12:42:37.295354  101660 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-014550"
	I0210 12:42:37.295371  101660 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-014550"
	I0210 12:42:37.295389  101660 addons.go:69] Setting default-storageclass=true in profile "dockerenv-014550"
	I0210 12:42:37.295398  101660 config.go:182] Loaded profile config "dockerenv-014550": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:42:37.295405  101660 host.go:66] Checking if "dockerenv-014550" exists ...
	I0210 12:42:37.295406  101660 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-014550"
	I0210 12:42:37.295774  101660 cli_runner.go:164] Run: docker container inspect dockerenv-014550 --format={{.State.Status}}
	I0210 12:42:37.295908  101660 cli_runner.go:164] Run: docker container inspect dockerenv-014550 --format={{.State.Status}}
	I0210 12:42:37.297089  101660 out.go:177] * Verifying Kubernetes components...
	I0210 12:42:37.298490  101660 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:42:37.319101  101660 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 12:42:37.320783  101660 addons.go:238] Setting addon default-storageclass=true in "dockerenv-014550"
	I0210 12:42:37.320826  101660 host.go:66] Checking if "dockerenv-014550" exists ...
	I0210 12:42:37.321298  101660 cli_runner.go:164] Run: docker container inspect dockerenv-014550 --format={{.State.Status}}
	I0210 12:42:37.321714  101660 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 12:42:37.321726  101660 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 12:42:37.321785  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:37.346529  101660 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 12:42:37.346544  101660 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 12:42:37.346602  101660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-014550
	I0210 12:42:37.346853  101660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa Username:docker}
	I0210 12:42:37.365436  101660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/dockerenv-014550/id_rsa Username:docker}
	I0210 12:42:37.519149  101660 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 12:42:37.528737  101660 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:42:37.647267  101660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 12:42:37.701282  101660 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 12:42:37.886532  101660 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0210 12:42:37.887468  101660 api_server.go:52] waiting for apiserver process to appear ...
	I0210 12:42:37.887521  101660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:42:38.072081  101660 api_server.go:72] duration metric: took 776.876173ms to wait for apiserver process to appear ...
	I0210 12:42:38.072170  101660 api_server.go:88] waiting for apiserver healthz status ...
	I0210 12:42:38.072191  101660 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0210 12:42:38.074297  101660 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0210 12:42:38.075488  101660 addons.go:514] duration metric: took 780.230625ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0210 12:42:38.077101  101660 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0210 12:42:38.077942  101660 api_server.go:141] control plane version: v1.32.1
	I0210 12:42:38.077958  101660 api_server.go:131] duration metric: took 5.779494ms to wait for apiserver health ...
	I0210 12:42:38.077966  101660 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 12:42:38.080191  101660 system_pods.go:59] 5 kube-system pods found
	I0210 12:42:38.080212  101660 system_pods.go:61] "etcd-dockerenv-014550" [8fdb454b-29dc-46af-9517-b7444a6e0c63] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 12:42:38.080221  101660 system_pods.go:61] "kube-apiserver-dockerenv-014550" [97dc1bc3-a28e-4944-8afe-e63482726b0c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 12:42:38.080230  101660 system_pods.go:61] "kube-controller-manager-dockerenv-014550" [5faebcde-7369-4081-a4e8-2837c331b59b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 12:42:38.080235  101660 system_pods.go:61] "kube-scheduler-dockerenv-014550" [4148adea-347b-411d-9d00-82d1c176d834] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 12:42:38.080241  101660 system_pods.go:61] "storage-provisioner" [14621558-7932-4c2c-9289-cb389c2c71f9] Pending
	I0210 12:42:38.080247  101660 system_pods.go:74] duration metric: took 2.275868ms to wait for pod list to return data ...
	I0210 12:42:38.080259  101660 kubeadm.go:582] duration metric: took 785.056262ms to wait for: map[apiserver:true system_pods:true]
	I0210 12:42:38.080273  101660 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:42:38.082275  101660 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0210 12:42:38.082288  101660 node_conditions.go:123] node cpu capacity is 8
	I0210 12:42:38.082298  101660 node_conditions.go:105] duration metric: took 2.021873ms to run NodePressure ...
	I0210 12:42:38.082309  101660 start.go:241] waiting for startup goroutines ...
	I0210 12:42:38.389843  101660 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-014550" context rescaled to 1 replicas
	I0210 12:42:38.389870  101660 start.go:246] waiting for cluster config update ...
	I0210 12:42:38.389879  101660 start.go:255] writing updated cluster config ...
	I0210 12:42:38.390161  101660 ssh_runner.go:195] Run: rm -f paused
	I0210 12:42:38.438429  101660 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 12:42:38.440303  101660 out.go:177] * Done! kubectl is now configured to use "dockerenv-014550" cluster and "default" namespace by default
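For reference, the readiness checks logged above can be re-run by hand against the same cluster; a minimal sketch, assuming the "dockerenv-014550" kubeconfig context created in this run is still present:

  # API server health probe (equivalent to the healthz check logged at 12:42:38)
  kubectl --context dockerenv-014550 get --raw /healthz
  # control-plane pods in kube-system, and the storage class from the default-storageclass addon
  kubectl --context dockerenv-014550 -n kube-system get pods
  kubectl --context dockerenv-014550 get storageclass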
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf2c7c1cdc461       d300845f67aeb       8 seconds ago       Running             kindnet-cni               0                   b5b5d9fea8155       kindnet-jsvdc
	e96634655adba       e29f9c7391fd9       10 seconds ago      Running             kube-proxy                0                   a228e5f5a19a4       kube-proxy-pbh7k
	1bd1ce2045585       6e38f40d628db       10 seconds ago      Running             storage-provisioner       0                   756e7516e9eb7       storage-provisioner
	7fad586eec65c       2b0d6572d062c       20 seconds ago      Running             kube-scheduler            0                   f2d95b131e1ae       kube-scheduler-dockerenv-014550
	9dae629de47f1       019ee182b58e2       20 seconds ago      Running             kube-controller-manager   0                   31bcb7fa4e662       kube-controller-manager-dockerenv-014550
	62fb54a4393ab       95c0bda56fc4d       20 seconds ago      Running             kube-apiserver            0                   4b4ebe5e8e218       kube-apiserver-dockerenv-014550
	7117c33a93db9       a9e7e6b294baf       20 seconds ago      Running             etcd                      0                   3a8812a15811f       etcd-dockerenv-014550
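The container listing above follows the crictl column layout; a hedged way to reproduce it from the host, assuming the dockerenv-014550 profile still exists and crictl is present in the node image, as it normally is for the containerd runtime:

  # list all containers (running and exited) inside the minikube node
  minikube -p dockerenv-014550 ssh -- sudo crictl ps -a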
	
	
	==> containerd <==
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.705423393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.705433334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.705515501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.734467356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbh7k,Uid:b6ce93f9-bc6e-4c64-956c-ff64bf836548,Namespace:kube-system,Attempt:0,} returns sandbox id \"a228e5f5a19a4b3350523c5d10dd5b10288b443a403d69bfdc31eee92f3897ae\""
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.737327600Z" level=info msg="CreateContainer within sandbox \"a228e5f5a19a4b3350523c5d10dd5b10288b443a403d69bfdc31eee92f3897ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.790333567Z" level=info msg="CreateContainer within sandbox \"a228e5f5a19a4b3350523c5d10dd5b10288b443a403d69bfdc31eee92f3897ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e96634655adba68e0b5d268498919979d4edb0d0dfd08bb2952bc8fb4a2ccc3f\""
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.791042431Z" level=info msg="StartContainer for \"e96634655adba68e0b5d268498919979d4edb0d0dfd08bb2952bc8fb4a2ccc3f\""
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.843240353Z" level=info msg="StartContainer for \"e96634655adba68e0b5d268498919979d4edb0d0dfd08bb2952bc8fb4a2ccc3f\" returns successfully"
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.975487723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b9k5d,Uid:2f3500ba-c486-4e15-9376-5119d4d37f67,Namespace:kube-system,Attempt:0,}"
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.986057137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-jsvdc,Uid:e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5b5d9fea8155864690a178bb6d599938feaa1663bee406e9bc7766885dc228d\""
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.988129259Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20241212-9f82dd49\""
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.989530508Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:42:41 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:41.996668352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-b9k5d,Uid:2f3500ba-c486-4e15-9376-5119d4d37f67,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\": failed to find network info for sandbox \"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\""
	Feb 10 12:42:42 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:42.286924261Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.686974746Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd:v20241212-9f82dd49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.687831547Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20241212-9f82dd49: active requests=0, bytes read=27533118"
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.689218540Z" level=info msg="ImageCreate event name:\"sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.691440807Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.692012172Z" level=info msg="Pulled image \"docker.io/kindest/kindnetd:v20241212-9f82dd49\" with image id \"sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56\", repo tag \"docker.io/kindest/kindnetd:v20241212-9f82dd49\", repo digest \"docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26\", size \"39008320\" in 1.703835356s"
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.692050949Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20241212-9f82dd49\" returns image reference \"sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56\""
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.694379555Z" level=info msg="CreateContainer within sandbox \"b5b5d9fea8155864690a178bb6d599938feaa1663bee406e9bc7766885dc228d\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.706203000Z" level=info msg="CreateContainer within sandbox \"b5b5d9fea8155864690a178bb6d599938feaa1663bee406e9bc7766885dc228d\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"cf2c7c1cdc4614ca40a74fc4fa05bd18d916e25d5c3d07b266bb62b0427c36c1\""
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.706727264Z" level=info msg="StartContainer for \"cf2c7c1cdc4614ca40a74fc4fa05bd18d916e25d5c3d07b266bb62b0427c36c1\""
	Feb 10 12:42:43 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:43.797260493Z" level=info msg="StartContainer for \"cf2c7c1cdc4614ca40a74fc4fa05bd18d916e25d5c3d07b266bb62b0427c36c1\" returns successfully"
	Feb 10 12:42:46 dockerenv-014550 containerd[858]: time="2025-02-10T12:42:46.770407920Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
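The "failed to find network info for sandbox" error for coredns-668d6bf9bc-b9k5d above is logged before the kindnet CNI config is in place (kindnet only starts pulling at 12:42:41 and its container starts at 12:42:43), and the kubelet retries the pod sync afterwards. A hedged sketch for pulling the same containerd journal from the node, assuming containerd runs as a systemd unit in the node image:

  # last 50 containerd journal entries inside the minikube node
  minikube -p dockerenv-014550 ssh -- sudo journalctl -u containerd --no-pager -n 50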
	
	
	==> describe nodes <==
	Name:               dockerenv-014550
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=dockerenv-014550
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04
	                    minikube.k8s.io/name=dockerenv-014550
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T12_42_37_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:42:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-014550
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:42:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:42:46 +0000   Mon, 10 Feb 2025 12:42:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:42:46 +0000   Mon, 10 Feb 2025 12:42:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:42:46 +0000   Mon, 10 Feb 2025 12:42:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:42:46 +0000   Mon, 10 Feb 2025 12:42:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-014550
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 71076ffb32c64d0194ae340d0dc8c3d7
	  System UUID:                5cc55aa8-b9e1-49e6-ad71-bedb3473e9f0
	  Boot ID:                    1d7cad77-75d7-418d-a590-e8096751a144
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-b9k5d                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11s
	  kube-system                 etcd-dockerenv-014550                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16s
	  kube-system                 kindnet-jsvdc                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11s
	  kube-system                 kube-apiserver-dockerenv-014550             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-dockerenv-014550    200m (2%)     0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-pbh7k                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-dockerenv-014550             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 10s   kube-proxy       
	  Normal   Starting                 16s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  16s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16s   kubelet          Node dockerenv-014550 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s   kubelet          Node dockerenv-014550 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s   kubelet          Node dockerenv-014550 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12s   node-controller  Node dockerenv-014550 event: Registered Node dockerenv-014550 in Controller
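The node description above (labels, conditions, capacity, non-terminated pods, and events) can be regenerated with kubectl; a minimal sketch, assuming the same context name as this run:

  # full node description for the single control-plane node
  kubectl --context dockerenv-014550 describe node dockerenv-014550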
	
	
	==> dmesg <==
	[Feb10 09:17]  #2
	[  +0.001427]  #3
	[  +0.000000]  #4
	[  +0.003161] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003164] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002021] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002123]  #5
	[  +0.000751]  #6
	[  +0.000811]  #7
	[  +0.060730] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.448106] i8042: Warning: Keylock active
	[  +0.009792] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004111] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001792] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.002113] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001740] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.629359] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026636] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.129242] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [7117c33a93db922de576ff59cb0a8bda8d4862d66ef633f105da7aa9da7818c2] <==
	{"level":"info","ts":"2025-02-10T12:42:31.906613Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-10T12:42:31.906762Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-10T12:42:31.906795Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-10T12:42:31.906956Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-10T12:42:31.906997Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-10T12:42:32.194413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-02-10T12:42:32.194463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-02-10T12:42:32.194500Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-02-10T12:42:32.194524Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-02-10T12:42:32.194537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-10T12:42:32.194550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-02-10T12:42:32.194563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-10T12:42:32.195428Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:42:32.196160Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:dockerenv-014550 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T12:42:32.196192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:42:32.196222Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:42:32.196454Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T12:42:32.196551Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T12:42:32.196591Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:42:32.196684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:42:32.196710Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:42:32.197114Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:42:32.197193Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:42:32.198052Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T12:42:32.198085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 12:42:52 up  3:25,  0 users,  load average: 1.26, 0.58, 0.33
	Linux dockerenv-014550 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [cf2c7c1cdc4614ca40a74fc4fa05bd18d916e25d5c3d07b266bb62b0427c36c1] <==
	I0210 12:42:43.984950       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:42:43.985270       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0210 12:42:43.985439       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:42:43.985464       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:42:43.985492       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:42:44.385340       1 controller.go:361] Starting controller kube-network-policies
	I0210 12:42:44.385375       1 controller.go:365] Waiting for informer caches to sync
	I0210 12:42:44.385383       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0210 12:42:44.685567       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0210 12:42:44.685616       1 metrics.go:61] Registering metrics
	I0210 12:42:44.685676       1 controller.go:401] Syncing nftables rules
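The kindnet log above is taken from the kindnet-cni container of pod kindnet-jsvdc; a hedged sketch for fetching it directly, assuming the pod and container names shown in the container status table for this run:

  # kindnet CNI daemon log for this node
  kubectl --context dockerenv-014550 -n kube-system logs kindnet-jsvdc -c kindnet-cni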
	
	
	==> kube-apiserver [62fb54a4393abb46080901667813159f92337b8641dc5713a7d3645304f974dd] <==
	I0210 12:42:34.086903       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0210 12:42:34.086911       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0210 12:42:34.087037       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:42:34.087873       1 shared_informer.go:320] Caches are synced for node_authorizer
	E0210 12:42:34.091409       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0210 12:42:34.094993       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:42:34.095027       1 policy_source.go:240] refreshing policies
	E0210 12:42:34.140767       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0210 12:42:34.187716       1 controller.go:615] quota admission added evaluator for: namespaces
	I0210 12:42:34.293908       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:42:34.953765       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0210 12:42:34.957499       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0210 12:42:34.957522       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 12:42:35.361606       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:42:35.394059       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:42:35.498744       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0210 12:42:35.504558       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0210 12:42:35.505803       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:42:35.509872       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:42:36.008645       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:42:36.329690       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:42:36.338928       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0210 12:42:36.347461       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:42:41.260163       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0210 12:42:41.359865       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9dae629de47f1b047b7441a565c5e302f13f0326ba646c7c7a0b3214aa21e8d6] <==
	I0210 12:42:40.558237       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:42:40.558324       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:42:40.558309       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0210 12:42:40.558760       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:42:40.558863       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:42:40.558934       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:42:40.560496       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:42:40.561875       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:42:40.563058       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:42:40.563071       1 shared_informer.go:320] Caches are synced for node
	I0210 12:42:40.563127       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:42:40.563182       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:42:40.563194       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:42:40.563201       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:42:40.563525       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:42:40.573220       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="dockerenv-014550" podCIDRs=["10.244.0.0/24"]
	I0210 12:42:40.573446       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="dockerenv-014550"
	I0210 12:42:40.573492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="dockerenv-014550"
	I0210 12:42:40.575333       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:42:41.565947       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="dockerenv-014550"
	I0210 12:42:41.673769       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="409.847128ms"
	I0210 12:42:41.680204       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="6.374216ms"
	I0210 12:42:41.680310       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.621µs"
	I0210 12:42:41.685771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="91.389µs"
	I0210 12:42:46.778955       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="dockerenv-014550"
	
	
	==> kube-proxy [e96634655adba68e0b5d268498919979d4edb0d0dfd08bb2952bc8fb4a2ccc3f] <==
	I0210 12:42:41.875292       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:42:41.981763       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0210 12:42:41.981828       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:42:42.003364       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0210 12:42:42.003425       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:42:42.005301       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:42:42.005681       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:42:42.005710       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:42:42.007257       1 config.go:199] "Starting service config controller"
	I0210 12:42:42.007351       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:42:42.007260       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:42:42.007430       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:42:42.007461       1 config.go:329] "Starting node config controller"
	I0210 12:42:42.007466       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:42:42.108443       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:42:42.108456       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:42:42.108517       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7fad586eec65cb8a6878dd2ffc1fd79d1bb4c5518ad7563f948253bb53845140] <==
	W0210 12:42:34.108127       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0210 12:42:34.108153       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:34.108220       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 12:42:34.108259       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:34.108227       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0210 12:42:34.108293       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0210 12:42:34.108314       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0210 12:42:34.108334       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:34.108358       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 12:42:34.108404       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:34.108410       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0210 12:42:34.108434       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:34.108492       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 12:42:34.108531       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:34.937906       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 12:42:34.937947       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 12:42:35.181202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 12:42:35.181248       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:35.187662       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 12:42:35.187707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:35.213348       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 12:42:35.213401       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:42:35.217620       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 12:42:35.217656       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:42:37.004907       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 12:42:40 dockerenv-014550 kubelet[1590]: I0210 12:42:40.608610    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwbz5\" (UniqueName: \"kubernetes.io/projected/14621558-7932-4c2c-9289-cb389c2c71f9-kube-api-access-wwbz5\") pod \"storage-provisioner\" (UID: \"14621558-7932-4c2c-9289-cb389c2c71f9\") " pod="kube-system/storage-provisioner"
	Feb 10 12:42:40 dockerenv-014550 kubelet[1590]: I0210 12:42:40.608678    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/14621558-7932-4c2c-9289-cb389c2c71f9-tmp\") pod \"storage-provisioner\" (UID: \"14621558-7932-4c2c-9289-cb389c2c71f9\") " pod="kube-system/storage-provisioner"
	Feb 10 12:42:40 dockerenv-014550 kubelet[1590]: E0210 12:42:40.714606    1590 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Feb 10 12:42:40 dockerenv-014550 kubelet[1590]: E0210 12:42:40.714646    1590 projected.go:194] Error preparing data for projected volume kube-api-access-wwbz5 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Feb 10 12:42:40 dockerenv-014550 kubelet[1590]: E0210 12:42:40.714718    1590 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/14621558-7932-4c2c-9289-cb389c2c71f9-kube-api-access-wwbz5 podName:14621558-7932-4c2c-9289-cb389c2c71f9 nodeName:}" failed. No retries permitted until 2025-02-10 12:42:41.214695231 +0000 UTC m=+5.111382209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wwbz5" (UniqueName: "kubernetes.io/projected/14621558-7932-4c2c-9289-cb389c2c71f9-kube-api-access-wwbz5") pod "storage-provisioner" (UID: "14621558-7932-4c2c-9289-cb389c2c71f9") : configmap "kube-root-ca.crt" not found
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.313825    1590 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.413787    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cmq4\" (UniqueName: \"kubernetes.io/projected/e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5-kube-api-access-5cmq4\") pod \"kindnet-jsvdc\" (UID: \"e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5\") " pod="kube-system/kindnet-jsvdc"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.413846    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6ce93f9-bc6e-4c64-956c-ff64bf836548-xtables-lock\") pod \"kube-proxy-pbh7k\" (UID: \"b6ce93f9-bc6e-4c64-956c-ff64bf836548\") " pod="kube-system/kube-proxy-pbh7k"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.413887    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5-cni-cfg\") pod \"kindnet-jsvdc\" (UID: \"e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5\") " pod="kube-system/kindnet-jsvdc"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.413910    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6ce93f9-bc6e-4c64-956c-ff64bf836548-lib-modules\") pod \"kube-proxy-pbh7k\" (UID: \"b6ce93f9-bc6e-4c64-956c-ff64bf836548\") " pod="kube-system/kube-proxy-pbh7k"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.414000    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5-lib-modules\") pod \"kindnet-jsvdc\" (UID: \"e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5\") " pod="kube-system/kindnet-jsvdc"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.414122    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5-xtables-lock\") pod \"kindnet-jsvdc\" (UID: \"e4afeb5c-9d1a-4f5e-a089-9e1e7601c1d5\") " pod="kube-system/kindnet-jsvdc"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.414165    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b6ce93f9-bc6e-4c64-956c-ff64bf836548-kube-proxy\") pod \"kube-proxy-pbh7k\" (UID: \"b6ce93f9-bc6e-4c64-956c-ff64bf836548\") " pod="kube-system/kube-proxy-pbh7k"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.414199    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fxpj\" (UniqueName: \"kubernetes.io/projected/b6ce93f9-bc6e-4c64-956c-ff64bf836548-kube-api-access-9fxpj\") pod \"kube-proxy-pbh7k\" (UID: \"b6ce93f9-bc6e-4c64-956c-ff64bf836548\") " pod="kube-system/kube-proxy-pbh7k"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.716193    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f3500ba-c486-4e15-9376-5119d4d37f67-config-volume\") pod \"coredns-668d6bf9bc-b9k5d\" (UID: \"2f3500ba-c486-4e15-9376-5119d4d37f67\") " pod="kube-system/coredns-668d6bf9bc-b9k5d"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: I0210 12:42:41.716239    1590 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gllf8\" (UniqueName: \"kubernetes.io/projected/2f3500ba-c486-4e15-9376-5119d4d37f67-kube-api-access-gllf8\") pod \"coredns-668d6bf9bc-b9k5d\" (UID: \"2f3500ba-c486-4e15-9376-5119d4d37f67\") " pod="kube-system/coredns-668d6bf9bc-b9k5d"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: E0210 12:42:41.996922    1590 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\": failed to find network info for sandbox \"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\""
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: E0210 12:42:41.997007    1590 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\": failed to find network info for sandbox \"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\"" pod="kube-system/coredns-668d6bf9bc-b9k5d"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: E0210 12:42:41.997029    1590 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\": failed to find network info for sandbox \"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\"" pod="kube-system/coredns-668d6bf9bc-b9k5d"
	Feb 10 12:42:41 dockerenv-014550 kubelet[1590]: E0210 12:42:41.997077    1590 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-b9k5d_kube-system(2f3500ba-c486-4e15-9376-5119d4d37f67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-b9k5d_kube-system(2f3500ba-c486-4e15-9376-5119d4d37f67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\\\": failed to find network info for sandbox \\\"9807eeeb02978f66cb4a6fe5f30ba3e259971871b4437cf03218867e043e07c4\\\"\"" pod="kube-system/coredns-668d6bf9bc-b9k5d" podUID="2f3500ba-c486-4e15-9376-5119d4d37f67"
	Feb 10 12:42:42 dockerenv-014550 kubelet[1590]: I0210 12:42:42.237274    1590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=4.237250245 podStartE2EDuration="4.237250245s" podCreationTimestamp="2025-02-10 12:42:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:42:42.237162176 +0000 UTC m=+6.133849171" watchObservedRunningTime="2025-02-10 12:42:42.237250245 +0000 UTC m=+6.133937249"
	Feb 10 12:42:44 dockerenv-014550 kubelet[1590]: I0210 12:42:44.036275    1590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbh7k" podStartSLOduration=3.036250445 podStartE2EDuration="3.036250445s" podCreationTimestamp="2025-02-10 12:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:42:42.246669932 +0000 UTC m=+6.143356927" watchObservedRunningTime="2025-02-10 12:42:44.036250445 +0000 UTC m=+7.932937439"
	Feb 10 12:42:44 dockerenv-014550 kubelet[1590]: I0210 12:42:44.256040    1590 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-jsvdc" podStartSLOduration=1.550422054 podStartE2EDuration="3.256014346s" podCreationTimestamp="2025-02-10 12:42:41 +0000 UTC" firstStartedPulling="2025-02-10 12:42:41.987528493 +0000 UTC m=+5.884215482" lastFinishedPulling="2025-02-10 12:42:43.693120795 +0000 UTC m=+7.589807774" observedRunningTime="2025-02-10 12:42:44.255980864 +0000 UTC m=+8.152667872" watchObservedRunningTime="2025-02-10 12:42:44.256014346 +0000 UTC m=+8.152701341"
	Feb 10 12:42:46 dockerenv-014550 kubelet[1590]: I0210 12:42:46.769824    1590 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 10 12:42:46 dockerenv-014550 kubelet[1590]: I0210 12:42:46.770717    1590 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	
	==> storage-provisioner [1bd1ce204558549efc072227e01937cc0788cfc382be647a97a906102fcad585] <==
	I0210 12:42:41.603748       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p dockerenv-014550 -n dockerenv-014550
helpers_test.go:261: (dbg) Run:  kubectl --context dockerenv-014550 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: coredns-668d6bf9bc-b9k5d
helpers_test.go:274: ======> post-mortem[TestDockerEnvContainerd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context dockerenv-014550 describe pod coredns-668d6bf9bc-b9k5d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context dockerenv-014550 describe pod coredns-668d6bf9bc-b9k5d: exit status 1 (59.649956ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-668d6bf9bc-b9k5d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context dockerenv-014550 describe pod coredns-668d6bf9bc-b9k5d: exit status 1
helpers_test.go:175: Cleaning up "dockerenv-014550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-014550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-014550: (1.87371512s)
--- FAIL: TestDockerEnvContainerd (36.99s)
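
The kubelet log above shows why coredns never started: sandbox creation failed with "failed to find network info for sandbox", and the pod CIDR only reaches the kubelet at 12:42:46, so the CNI config was not yet in place when the sandbox was first created. On a live run (the profile is deleted just above, so this is a hedged sketch only; the app=kindnet label and the /etc/cni/net.d path are assumptions), one could confirm that ordering directly:

    # Has kindnet written a CNI config inside the node yet?
    out/minikube-linux-amd64 -p dockerenv-014550 ssh -- ls -l /etc/cni/net.d

    # Is the CNI daemonset pod that installs that config running?
    kubectl --context dockerenv-014550 -n kube-system get pods -l app=kindnet -o wide

    # Re-check the pending coredns pod once the config exists
    kubectl --context dockerenv-014550 -n kube-system describe pod coredns-668d6bf9bc-b9k5d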

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (189s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9ba8dc7e-c5ee-4ce2-8f90-6d30e2cde7f2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002628793s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-644291 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-644291 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-644291 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-644291 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ed8df0ea-e7f0-4638-9dcf-db9225cfd833] Pending
helpers_test.go:344: "sp-pod" [ed8df0ea-e7f0-4638-9dcf-db9225cfd833] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-644291 -n functional-644291
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-02-10 12:48:15.17091215 +0000 UTC m=+954.328079294
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-644291 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-644291 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-644291/192.168.49.2
Start Time:       Mon, 10 Feb 2025 12:45:14 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:  10.244.0.7
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2nqqk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-2nqqk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  3m                    default-scheduler  Successfully assigned default/sp-pod to functional-644291
Warning  Failed     82s (x4 over 2m59s)   kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     82s (x4 over 2m59s)   kubelet            Error: ErrImagePull
Normal   BackOff    16s (x10 over 2m59s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     16s (x10 over 2m59s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    1s (x5 over 3m)       kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-644291 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-644291 logs sp-pod -n default: exit status 1 (69.363518ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-644291 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
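
The events above show the actual blocker: every pull of docker.io/nginx gets a 429 Too Many Requests from Docker Hub, so sp-pod never leaves ImagePullBackOff and the 3m0s wait expires. A hedged sketch of two workarounds for a rate-limited runner follows; the secret name, placeholder credentials, and patching of the default service account are illustrative assumptions, not part of this test run:

    # Authenticate pulls with a Docker Hub secret (credentials are placeholders)
    kubectl --context functional-644291 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>
    kubectl --context functional-644291 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

    # Or, if the host still has pull quota, side-load the image so the node never pulls
    docker pull nginx:latest
    out/minikube-linux-amd64 -p functional-644291 image load nginx:latest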
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-644291
helpers_test.go:235: (dbg) docker inspect functional-644291:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb",
	        "Created": "2025-02-10T12:43:33.360051285Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 112122,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-10T12:43:33.47439621Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb/hosts",
	        "LogPath": "/var/lib/docker/containers/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb-json.log",
	        "Name": "/functional-644291",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-644291:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-644291",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b33f58f3df7bcedbf80edb5fcea5f96c39f831d4247938ae3e872594bd9a025d-init/diff:/var/lib/docker/overlay2/9ffca27f7ebed742e3d0dd8f2061c1044c6b8fc8f60ace2c8ab1f353604acf23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b33f58f3df7bcedbf80edb5fcea5f96c39f831d4247938ae3e872594bd9a025d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b33f58f3df7bcedbf80edb5fcea5f96c39f831d4247938ae3e872594bd9a025d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b33f58f3df7bcedbf80edb5fcea5f96c39f831d4247938ae3e872594bd9a025d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-644291",
	                "Source": "/var/lib/docker/volumes/functional-644291/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644291",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644291",
	                "name.minikube.sigs.k8s.io": "functional-644291",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0dba4d0eab9542c96779ee5090175871390aebd1d277afc15a4beddb4d24b3bf",
	            "SandboxKey": "/var/run/docker/netns/0dba4d0eab95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644291": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "abd9025709fcbd16ff16a77b6a748d0822a7f329f09e6e731763e49c8db0ebc9",
	                    "EndpointID": "0068133dce7ba462a5b3d3d47c4276fbe7054969024428094665b5a73c1307f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644291",
	                        "d315ad61861b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-644291 -n functional-644291
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-644291 logs -n 25: (1.426094641s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                       Args                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-644291 image save kicbase/echo-server:functional-644291               | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh findmnt                                                    | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | -T /mount3                                                                       |                   |         |         |                     |                     |
	| image          | functional-644291 image rm                                                       | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | kicbase/echo-server:functional-644291                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| mount          | -p functional-644291                                                             | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC |                     |
	|                | --kill=true                                                                      |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/ssl/certs/78349.pem                                                         |                   |         |         |                     |                     |
	| image          | functional-644291 image ls                                                       | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /usr/share/ca-certificates/78349.pem                                             |                   |         |         |                     |                     |
	| image          | functional-644291 image load                                                     | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/ssl/certs/51391683.0                                                        |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/test/nested/copy/78349/hosts                                                |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/ssl/certs/783492.pem                                                        |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                               | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | -p functional-644291                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                           |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /usr/share/ca-certificates/783492.pem                                            |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                        |                   |         |         |                     |                     |
	| service        | functional-644291 service                                                        | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | hello-node-connect --url                                                         |                   |         |         |                     |                     |
	| image          | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | image ls --format short                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh pgrep                                                      | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC |                     |
	|                | buildkitd                                                                        |                   |         |         |                     |                     |
	| image          | functional-644291 image build -t                                                 | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | localhost/my-image:functional-644291                                             |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                 |                   |         |         |                     |                     |
	| image          | functional-644291 image ls                                                       | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	| image          | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | image ls --format yaml                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | image ls --format json                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | image ls --format table                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| update-context | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	| update-context | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	| update-context | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:45:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:45:19.853036  124776 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:45:19.853170  124776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:19.853180  124776 out.go:358] Setting ErrFile to fd 2...
	I0210 12:45:19.853187  124776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:19.853489  124776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:45:19.854071  124776 out.go:352] Setting JSON to false
	I0210 12:45:19.855111  124776 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12469,"bootTime":1739179051,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:45:19.855223  124776 start.go:139] virtualization: kvm guest
	I0210 12:45:19.857439  124776 out.go:177] * [functional-644291] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:45:19.860460  124776 notify.go:220] Checking for updates...
	I0210 12:45:19.860507  124776 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:45:19.862100  124776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:45:19.863628  124776 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:45:19.864928  124776 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 12:45:19.866453  124776 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:45:19.867685  124776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:45:19.869275  124776 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:45:19.869778  124776 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:45:19.894412  124776 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 12:45:19.894508  124776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:45:19.944105  124776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-10 12:45:19.93510259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:45:19.944255  124776 docker.go:318] overlay module found
	I0210 12:45:19.946960  124776 out.go:177] * Using the docker driver based on existing profile
	I0210 12:45:19.948316  124776 start.go:297] selected driver: docker
	I0210 12:45:19.948329  124776 start.go:901] validating driver "docker" against &{Name:functional-644291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-644291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:45:19.948417  124776 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:45:19.950489  124776 out.go:201] 
	W0210 12:45:19.951592  124776 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0210 12:45:19.952654  124776 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	6a22b231d70a2       07655ddf2eebe       2 minutes ago       Running             kubernetes-dashboard        0                   09de513bd237d       kubernetes-dashboard-7779f9b69b-gn8x2
	9efd0bdd3b3f5       115053965e86b       2 minutes ago       Running             dashboard-metrics-scraper   0                   4c2f5e56a2232       dashboard-metrics-scraper-5d59dccf9b-65dm5
	8f2d5a3cc2d71       82e4c8a736a4f       2 minutes ago       Running             echoserver                  0                   4b7d3cd7debfc       hello-node-connect-58f9cf68d8-2klqt
	30df225eca01f       56cc512116c8f       3 minutes ago       Exited              mount-munger                0                   f046e5795fbb8       busybox-mount
	08960918e1eea       d41a14a4ecff9       3 minutes ago       Running             nginx                       0                   ea1ad2cbf7664       nginx-svc
	eb3dbf3b06c67       82e4c8a736a4f       3 minutes ago       Running             echoserver                  0                   5f2fd218a3f96       hello-node-fcfd88b6f-xt54x
	e1fff5a51528d       6e38f40d628db       3 minutes ago       Running             storage-provisioner         2                   4703b5cc493c8       storage-provisioner
	7563360b4bf4c       95c0bda56fc4d       3 minutes ago       Running             kube-apiserver              0                   cabecb87eb98b       kube-apiserver-functional-644291
	b90900b5647c2       019ee182b58e2       3 minutes ago       Running             kube-controller-manager     1                   74c1dff987266       kube-controller-manager-functional-644291
	76a4e58566e6b       2b0d6572d062c       3 minutes ago       Running             kube-scheduler              1                   b90a45caf9fa6       kube-scheduler-functional-644291
	bf5b244fbab74       a9e7e6b294baf       3 minutes ago       Running             etcd                        1                   641ee0148205f       etcd-functional-644291
	157b8b452cccc       c69fa2e9cbf5f       3 minutes ago       Running             coredns                     1                   ba37af7e18a56       coredns-668d6bf9bc-m4jhh
	cbb9c47f16a76       d300845f67aeb       3 minutes ago       Running             kindnet-cni                 1                   3b04e431ca430       kindnet-f6dcs
	afd91bd053797       6e38f40d628db       3 minutes ago       Exited              storage-provisioner         1                   4703b5cc493c8       storage-provisioner
	e0428c3dd6828       e29f9c7391fd9       3 minutes ago       Running             kube-proxy                  1                   44ff8fa19f88f       kube-proxy-gfv78
	b76a9182fa02c       c69fa2e9cbf5f       4 minutes ago       Exited              coredns                     0                   ba37af7e18a56       coredns-668d6bf9bc-m4jhh
	ac13a6e9bb208       d300845f67aeb       4 minutes ago       Exited              kindnet-cni                 0                   3b04e431ca430       kindnet-f6dcs
	f90fd1eb38c42       e29f9c7391fd9       4 minutes ago       Exited              kube-proxy                  0                   44ff8fa19f88f       kube-proxy-gfv78
	3934b5b229b97       019ee182b58e2       4 minutes ago       Exited              kube-controller-manager     0                   74c1dff987266       kube-controller-manager-functional-644291
	92c950ec71308       a9e7e6b294baf       4 minutes ago       Exited              etcd                        0                   641ee0148205f       etcd-functional-644291
	df688f746edef       2b0d6572d062c       4 minutes ago       Exited              kube-scheduler              0                   b90a45caf9fa6       kube-scheduler-functional-644291
	
	
	==> containerd <==
	Feb 10 12:46:01 functional-644291 containerd[3910]: time="2025-02-10T12:46:01.332590550Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Feb 10 12:46:01 functional-644291 containerd[3910]: time="2025-02-10T12:46:01.335595922Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:46:01 functional-644291 containerd[3910]: time="2025-02-10T12:46:01.611467132Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:46:02 functional-644291 containerd[3910]: time="2025-02-10T12:46:02.233481826Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:46:02 functional-644291 containerd[3910]: time="2025-02-10T12:46:02.233572177Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=11042"
	Feb 10 12:46:09 functional-644291 containerd[3910]: time="2025-02-10T12:46:09.332760735Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Feb 10 12:46:09 functional-644291 containerd[3910]: time="2025-02-10T12:46:09.334466907Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:46:09 functional-644291 containerd[3910]: time="2025-02-10T12:46:09.592977063Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:46:10 functional-644291 containerd[3910]: time="2025-02-10T12:46:10.217232791Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:46:10 functional-644291 containerd[3910]: time="2025-02-10T12:46:10.217276514Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=11042"
	Feb 10 12:46:51 functional-644291 containerd[3910]: time="2025-02-10T12:46:51.332588852Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Feb 10 12:46:51 functional-644291 containerd[3910]: time="2025-02-10T12:46:51.334296617Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:46:51 functional-644291 containerd[3910]: time="2025-02-10T12:46:51.602986274Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:46:52 functional-644291 containerd[3910]: time="2025-02-10T12:46:52.402128182Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:46:52 functional-644291 containerd[3910]: time="2025-02-10T12:46:52.402231819Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=12048"
	Feb 10 12:46:52 functional-644291 containerd[3910]: time="2025-02-10T12:46:52.402984016Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Feb 10 12:46:52 functional-644291 containerd[3910]: time="2025-02-10T12:46:52.404513092Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:46:52 functional-644291 containerd[3910]: time="2025-02-10T12:46:52.655770232Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:46:53 functional-644291 containerd[3910]: time="2025-02-10T12:46:53.276685423Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:46:53 functional-644291 containerd[3910]: time="2025-02-10T12:46:53.276750118Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=11042"
	Feb 10 12:48:14 functional-644291 containerd[3910]: time="2025-02-10T12:48:14.332428354Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Feb 10 12:48:14 functional-644291 containerd[3910]: time="2025-02-10T12:48:14.334481633Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:48:14 functional-644291 containerd[3910]: time="2025-02-10T12:48:14.624417368Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:48:15 functional-644291 containerd[3910]: time="2025-02-10T12:48:15.237088175Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:48:15 functional-644291 containerd[3910]: time="2025-02-10T12:48:15.237163676Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=11042"
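
Two different signals appear in the containerd block above: the repeated "failed to decode hosts.toml" / "invalid `host` tree" warnings, and the 429 pull failures that actually break the test. The warnings point at a registry hosts file containerd cannot parse and are likely unrelated to the failure; a hedged way to compare what the node has against a minimal file containerd accepts (the certs.d path and mirror URL are assumptions):

    # Inspect the registry host config containerd is warning about
    out/minikube-linux-amd64 -p functional-644291 ssh -- cat /etc/containerd/certs.d/docker.io/hosts.toml

    # For comparison, a minimal hosts.toml that containerd parses cleanly looks like:
    #   server = "https://registry-1.docker.io"
    #
    #   [host."https://mirror.gcr.io"]
    #     capabilities = ["pull", "resolve"]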
	
	
	==> coredns [157b8b452cccc6d85387940af533c8c744a2fd10d4041e1b27c4062557150f37] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49894 - 21610 "HINFO IN 9000676847841937490.1454512200463263474. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033047055s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b76a9182fa02cab23c980eb37427ffbb882d273f661beb2d5bc35583fb4094d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33365 - 64930 "HINFO IN 2136753020773598883.2258325954173043898. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043502811s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-644291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-644291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04
	                    minikube.k8s.io/name=functional-644291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T12_43_46_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:43:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-644291
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:48:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:45:42 +0000   Mon, 10 Feb 2025 12:43:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:45:42 +0000   Mon, 10 Feb 2025 12:43:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:45:42 +0000   Mon, 10 Feb 2025 12:43:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:45:42 +0000   Mon, 10 Feb 2025 12:43:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-644291
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 545e95d59278495ca23ad2bc10457f10
	  System UUID:                d0bcccc7-2834-4dcd-b499-02cd12257fab
	  Boot ID:                    1d7cad77-75d7-418d-a590-e8096751a144
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-2klqt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m56s
	  default                     hello-node-fcfd88b6f-xt54x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     mysql-58ccfd96bb-rfvqw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     2m52s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-668d6bf9bc-m4jhh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m26s
	  kube-system                 etcd-functional-644291                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m33s
	  kube-system                 kindnet-f6dcs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m27s
	  kube-system                 kube-apiserver-functional-644291              250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-functional-644291     200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-proxy-gfv78                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-scheduler-functional-644291              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-65dm5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-gn8x2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m25s                  kube-proxy       
	  Normal   Starting                 3m31s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node functional-644291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node functional-644291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node functional-644291 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    4m31s                  kubelet          Node functional-644291 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 4m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m31s                  kubelet          Node functional-644291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     4m31s                  kubelet          Node functional-644291 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m27s                  node-controller  Node functional-644291 event: Registered Node functional-644291 in Controller
	  Normal   Starting                 3m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-644291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-644291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m38s (x7 over 3m38s)  kubelet          Node functional-644291 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m31s                  node-controller  Node functional-644291 event: Registered Node functional-644291 in Controller
	
	
	==> dmesg <==
	[Feb10 09:17]  #2
	[  +0.001427]  #3
	[  +0.000000]  #4
	[  +0.003161] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003164] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002021] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002123]  #5
	[  +0.000751]  #6
	[  +0.000811]  #7
	[  +0.060730] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.448106] i8042: Warning: Keylock active
	[  +0.009792] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004111] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001792] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.002113] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001740] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.629359] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026636] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.129242] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [92c950ec7130892c404773ce07718add244342994b701431906b3dddde039bbd] <==
	{"level":"info","ts":"2025-02-10T12:43:41.005175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-02-10T12:43:41.005188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-10T12:43:41.006197Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:43:41.006919Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:43:41.006916Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-644291 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T12:43:41.007014Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:43:41.007211Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T12:43:41.007270Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T12:43:41.007805Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:43:41.007925Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:43:41.008094Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:43:41.008168Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:43:41.008200Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:43:41.008752Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T12:43:41.008767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-10T12:44:36.963355Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-10T12:44:36.963427Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-644291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-02-10T12:44:36.963518Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-10T12:44:36.963573Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-10T12:44:36.965165Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-10T12:44:36.965200Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-10T12:44:36.965252Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-02-10T12:44:36.966669Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-10T12:44:36.966773Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-10T12:44:36.966792Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-644291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [bf5b244fbab74abcb0c839536f949c1bec38a52c2c0afcf44027ba78ed3402e3] <==
	{"level":"info","ts":"2025-02-10T12:44:39.208650Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-10T12:44:39.208609Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-10T12:44:39.208668Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-10T12:44:39.208693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-10T12:44:39.208707Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-10T12:44:39.209197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-02-10T12:44:39.209277Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-02-10T12:44:39.209390Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:44:39.209435Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:44:40.895874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-10T12:44:40.895928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-10T12:44:40.895953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-10T12:44:40.895965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-02-10T12:44:40.895987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-10T12:44:40.896020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-02-10T12:44:40.896027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-10T12:44:40.897112Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-644291 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T12:44:40.897130Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:44:40.897160Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:44:40.897329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T12:44:40.897398Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T12:44:40.898080Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:44:40.898334Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:44:40.898808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T12:44:40.898962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 12:48:16 up  3:30,  0 users,  load average: 0.18, 0.44, 0.35
	Linux functional-644291 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ac13a6e9bb20897fabc062ef2bee26a5e7f402da3d9f3fa6763e4e3953558b7e] <==
	I0210 12:43:53.292010       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:43:53.292272       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0210 12:43:53.292437       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:43:53.292459       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:43:53.292507       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:43:53.785288       1 controller.go:361] Starting controller kube-network-policies
	I0210 12:43:53.785445       1 controller.go:365] Waiting for informer caches to sync
	I0210 12:43:53.785481       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0210 12:43:53.985635       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0210 12:43:53.985668       1 metrics.go:61] Registering metrics
	I0210 12:43:53.985709       1 controller.go:401] Syncing nftables rules
	I0210 12:44:03.788570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:44:03.788647       1 main.go:301] handling current node
	I0210 12:44:13.792550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:44:13.792585       1 main.go:301] handling current node
	I0210 12:44:23.786350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:44:23.786408       1 main.go:301] handling current node
	
	
	==> kindnet [cbb9c47f16a765a1e24f1746f2a90d4e2f87f9333615c38c1d66822e03daa28d] <==
	I0210 12:46:08.085098       1 main.go:301] handling current node
	I0210 12:46:18.086169       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:46:18.086218       1 main.go:301] handling current node
	I0210 12:46:28.085822       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:46:28.085865       1 main.go:301] handling current node
	I0210 12:46:38.092533       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:46:38.092566       1 main.go:301] handling current node
	I0210 12:46:48.085349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:46:48.085417       1 main.go:301] handling current node
	I0210 12:46:58.088359       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:46:58.088395       1 main.go:301] handling current node
	I0210 12:47:08.085658       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:47:08.085707       1 main.go:301] handling current node
	I0210 12:47:18.085018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:47:18.085081       1 main.go:301] handling current node
	I0210 12:47:28.085627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:47:28.085679       1 main.go:301] handling current node
	I0210 12:47:38.086376       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:47:38.086423       1 main.go:301] handling current node
	I0210 12:47:48.085290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:47:48.085343       1 main.go:301] handling current node
	I0210 12:47:58.092808       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:47:58.092844       1 main.go:301] handling current node
	I0210 12:48:08.093907       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:48:08.093942       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7563360b4bf4c25cdf4e474bca18c3463b9b315e3236d6f259471e323e2470d4] <==
	I0210 12:44:41.986254       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:44:41.986209       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:44:41.993551       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:44:41.993596       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:44:41.993617       1 policy_source.go:240] refreshing policies
	I0210 12:44:42.006830       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:44:42.085871       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:44:42.394427       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:44:42.837540       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0210 12:44:43.096666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0210 12:44:43.097903       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:44:43.107136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:44:43.739077       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:44:43.826121       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:44:43.874093       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:44:43.879200       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:44:45.345527       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0210 12:45:04.610986       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.5.46"}
	I0210 12:45:08.677941       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.131.7"}
	I0210 12:45:10.207028       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.194.156"}
	I0210 12:45:20.885215       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.4.198"}
	I0210 12:45:24.531594       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.44.218"}
	I0210 12:45:24.719708       1 controller.go:615] quota admission added evaluator for: namespaces
	I0210 12:45:24.911621       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.167.0"}
	I0210 12:45:24.927250       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.104.161"}
	
	
	==> kube-controller-manager [3934b5b229b97f5a7ed5e08d976410ea5d27fc50b9ac7c3351eda09b51e82cec] <==
	I0210 12:43:49.262096       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:43:49.262134       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:43:49.262892       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:43:49.263626       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:43:49.264709       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:43:49.267851       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:43:49.269241       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:43:49.270234       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:43:49.281576       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:43:49.872885       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:43:50.412675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="194.785522ms"
	I0210 12:43:50.486723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="73.994992ms"
	I0210 12:43:50.486872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="87.131µs"
	I0210 12:43:50.496230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.074µs"
	I0210 12:43:50.809736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.609113ms"
	I0210 12:43:50.886715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="76.926399ms"
	I0210 12:43:50.886829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.635µs"
	I0210 12:43:52.229371       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="64.682µs"
	I0210 12:43:52.233954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="71.024µs"
	I0210 12:43:52.236812       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="84.232µs"
	I0210 12:43:55.626373       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:44:05.247393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.801µs"
	I0210 12:44:05.263441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="5.896954ms"
	I0210 12:44:05.263529       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.365µs"
	I0210 12:44:16.032280       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	
	
	==> kube-controller-manager [b90900b5647c2fefe53e414ca1ee2b3a12e8c5f875d3f25e228206ceb2f2bbcb] <==
	I0210 12:45:24.802506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="7.259479ms"
	E0210 12:45:24.802541       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0210 12:45:24.806781       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="3.0331ms"
	E0210 12:45:24.806819       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-7779f9b69b\" failed with pods \"kubernetes-dashboard-7779f9b69b-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0210 12:45:24.818546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="12.604302ms"
	I0210 12:45:24.826678       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="8.067329ms"
	I0210 12:45:24.826940       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="12.389343ms"
	I0210 12:45:24.884798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="92.033µs"
	I0210 12:45:24.889712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="44.096µs"
	I0210 12:45:24.897227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="70.242099ms"
	I0210 12:45:24.897329       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="51.913µs"
	I0210 12:45:24.899295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="84.19µs"
	I0210 12:45:26.566449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="66.733µs"
	I0210 12:45:27.576947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="5.804858ms"
	I0210 12:45:27.577051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="40.492µs"
	I0210 12:45:32.606042       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="6.13261ms"
	I0210 12:45:32.606149       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="61.449µs"
	I0210 12:45:41.343029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="69.864µs"
	I0210 12:45:42.953739       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:45:55.341091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="98.129µs"
	I0210 12:46:09.340799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="83.906µs"
	I0210 12:46:24.344270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="62.236µs"
	I0210 12:46:39.342187       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="83.264µs"
	I0210 12:47:07.340268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="71.089µs"
	I0210 12:47:21.341538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="67.658µs"
	
	
	==> kube-proxy [e0428c3dd6828e3ae01d646fedaff45d609242bf1de8d2b8a04393132d5c5dd7] <==
	I0210 12:44:27.518664       1 server_linux.go:66] "Using iptables proxy"
	E0210 12:44:27.638774       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644291\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0210 12:44:28.683428       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644291\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0210 12:44:31.077004       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644291\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0210 12:44:35.232829       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644291\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0210 12:44:44.645712       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0210 12:44:44.645783       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:44:44.666074       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0210 12:44:44.666133       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:44:44.667984       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:44:44.668338       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:44:44.668369       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:44:44.669553       1 config.go:329] "Starting node config controller"
	I0210 12:44:44.669600       1 config.go:199] "Starting service config controller"
	I0210 12:44:44.669641       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:44:44.669570       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:44:44.669691       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:44:44.669895       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:44:44.770647       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:44:44.770675       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:44:44.770713       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f90fd1eb38c424658fd5f3d345ce02cb4b8add522542df83bd62a7fab236ffa4] <==
	I0210 12:43:51.214304       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:43:51.368983       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0210 12:43:51.369061       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:43:51.388066       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0210 12:43:51.388120       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:43:51.390227       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:43:51.390625       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:43:51.390679       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:43:51.392311       1 config.go:199] "Starting service config controller"
	I0210 12:43:51.392354       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:43:51.392373       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:43:51.392403       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:43:51.392408       1 config.go:329] "Starting node config controller"
	I0210 12:43:51.392438       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:43:51.492547       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:43:51.492565       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:43:51.492626       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [76a4e58566e6b29b7b228f28bbc8199b35156ffdf70e6e549056e1f6be07c1f9] <==
	I0210 12:44:39.896946       1 serving.go:386] Generated self-signed cert in-memory
	W0210 12:44:41.985761       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0210 12:44:41.985873       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0210 12:44:41.985934       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0210 12:44:41.985980       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:44:41.999972       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:44:42.000004       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:44:42.002327       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:44:42.002378       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:44:42.002618       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:44:42.002711       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:44:42.102911       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [df688f746edef92a85218f714697b7e6b8f4ba6655449c4e8ad80899be03b33a] <==
	E0210 12:43:42.789061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789104       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0210 12:43:42.789142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789332       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 12:43:42.789696       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789512       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:42.789729       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789646       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:42.789748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789668       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:42.789807       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.607222       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 12:43:43.607267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.609214       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 12:43:43.609242       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 12:43:43.673863       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 12:43:43.673907       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.726265       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:43.726320       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.752800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:43.752847       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.773288       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 12:43:43.773327       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:43:46.415294       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0210 12:44:37.007810       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 10 12:46:39 functional-644291 kubelet[4837]: E0210 12:46:39.332607    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ed8df0ea-e7f0-4638-9dcf-db9225cfd833"
	Feb 10 12:46:39 functional-644291 kubelet[4837]: E0210 12:46:39.333263    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:46:52 functional-644291 kubelet[4837]: E0210 12:46:52.402473    4837 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Feb 10 12:46:52 functional-644291 kubelet[4837]: E0210 12:46:52.402538    4837 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Feb 10 12:46:52 functional-644291 kubelet[4837]: E0210 12:46:52.402839    4837 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scgtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-rfvqw_default(8eb2250f-e0af-4fb5-af1d-34a89b84beb4): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 10 12:46:52 functional-644291 kubelet[4837]: E0210 12:46:52.404126    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:46:53 functional-644291 kubelet[4837]: E0210 12:46:53.276972    4837 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Feb 10 12:46:53 functional-644291 kubelet[4837]: E0210 12:46:53.277038    4837 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Feb 10 12:46:53 functional-644291 kubelet[4837]: E0210 12:46:53.277139    4837 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(ed8df0ea-e7f0-4638-9dcf-db9225cfd833): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 10 12:46:53 functional-644291 kubelet[4837]: E0210 12:46:53.278343    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ed8df0ea-e7f0-4638-9dcf-db9225cfd833"
	Feb 10 12:47:06 functional-644291 kubelet[4837]: E0210 12:47:06.331865    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ed8df0ea-e7f0-4638-9dcf-db9225cfd833"
	Feb 10 12:47:07 functional-644291 kubelet[4837]: E0210 12:47:07.332161    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:47:18 functional-644291 kubelet[4837]: E0210 12:47:18.332121    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ed8df0ea-e7f0-4638-9dcf-db9225cfd833"
	Feb 10 12:47:21 functional-644291 kubelet[4837]: E0210 12:47:21.332782    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:47:30 functional-644291 kubelet[4837]: E0210 12:47:30.331688    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ed8df0ea-e7f0-4638-9dcf-db9225cfd833"
	Feb 10 12:47:33 functional-644291 kubelet[4837]: E0210 12:47:33.333130    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:47:45 functional-644291 kubelet[4837]: E0210 12:47:45.332066    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ed8df0ea-e7f0-4638-9dcf-db9225cfd833"
	Feb 10 12:47:48 functional-644291 kubelet[4837]: E0210 12:47:48.333525    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:47:59 functional-644291 kubelet[4837]: E0210 12:47:59.332101    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ed8df0ea-e7f0-4638-9dcf-db9225cfd833"
	Feb 10 12:47:59 functional-644291 kubelet[4837]: E0210 12:47:59.332819    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:48:10 functional-644291 kubelet[4837]: E0210 12:48:10.332609    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:48:15 functional-644291 kubelet[4837]: E0210 12:48:15.237376    4837 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Feb 10 12:48:15 functional-644291 kubelet[4837]: E0210 12:48:15.237454    4837 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Feb 10 12:48:15 functional-644291 kubelet[4837]: E0210 12:48:15.237562    4837 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2nqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(ed8df0ea-e7f0-4638-9dcf-db9225cfd833): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 10 12:48:15 functional-644291 kubelet[4837]: E0210 12:48:15.238757    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="ed8df0ea-e7f0-4638-9dcf-db9225cfd833"
	
	
	==> kubernetes-dashboard [6a22b231d70a2f45c67d34b67541a3fca8526182633d3954bf6ff7dc06906bf0] <==
	2025/02/10 12:45:31 Using namespace: kubernetes-dashboard
	2025/02/10 12:45:31 Using in-cluster config to connect to apiserver
	2025/02/10 12:45:31 Using secret token for csrf signing
	2025/02/10 12:45:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/10 12:45:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/10 12:45:31 Successful initial request to the apiserver, version: v1.32.1
	2025/02/10 12:45:31 Generating JWE encryption key
	2025/02/10 12:45:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/10 12:45:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/10 12:45:32 Initializing JWE encryption key from synchronized object
	2025/02/10 12:45:32 Creating in-cluster Sidecar client
	2025/02/10 12:45:32 Successful request to sidecar
	2025/02/10 12:45:32 Serving insecurely on HTTP port: 9090
	2025/02/10 12:45:31 Starting overwatch
	
	
	==> storage-provisioner [afd91bd053797ff635d08e8bff3c579ca4cbf5ef1e125be53cfc50aa325b23d3] <==
	I0210 12:44:27.425730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0210 12:44:27.486218       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e1fff5a51528d6f74defed72d370d5d84207d5621bd0adacd571143949f3b9f6] <==
	I0210 12:44:42.686434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 12:44:42.694138       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 12:44:42.694191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0210 12:45:00.088492       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0210 12:45:00.088689       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-644291_1042b5d0-ff17-44c1-99ee-2ec9cfc2f9b4!
	I0210 12:45:00.088627       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acae380a-a98c-4b99-ad8b-6b943ca6c9d3", APIVersion:"v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-644291_1042b5d0-ff17-44c1-99ee-2ec9cfc2f9b4 became leader
	I0210 12:45:00.188961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-644291_1042b5d0-ff17-44c1-99ee-2ec9cfc2f9b4!
	I0210 12:45:14.691054       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0210 12:45:14.691169       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4e8ce7ae-3776-4ae4-8e77-8f9fd9ad8ec6 392 0 2025-02-10 12:43:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-10 12:43:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f2d0b2d1-269b-4682-8251-fdca61914f2b 657 0 2025-02-10 12:45:14 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-10 12:45:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-10 12:45:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0210 12:45:14.691666       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b" provisioned
	I0210 12:45:14.691699       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0210 12:45:14.691709       1 volume_store.go:212] Trying to save persistentvolume "pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b"
	I0210 12:45:14.692903       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f2d0b2d1-269b-4682-8251-fdca61914f2b", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0210 12:45:14.702051       1 volume_store.go:219] persistentvolume "pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b" saved
	I0210 12:45:14.702243       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f2d0b2d1-269b-4682-8251-fdca61914f2b", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-644291 -n functional-644291
helpers_test.go:261: (dbg) Run:  kubectl --context functional-644291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-rfvqw sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-644291 describe pod busybox-mount mysql-58ccfd96bb-rfvqw sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-644291 describe pod busybox-mount mysql-58ccfd96bb-rfvqw sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-644291/192.168.49.2
	Start Time:       Mon, 10 Feb 2025 12:45:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  containerd://30df225eca01f0475a391579ccf8ee2ada556dcf548c063d846476dfea9a8982
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 10 Feb 2025 12:45:15 +0000
	      Finished:     Mon, 10 Feb 2025 12:45:15 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s59l5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-s59l5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m3s  default-scheduler  Successfully assigned default/busybox-mount to functional-644291
	  Normal  Pulling    3m3s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m2s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 689ms (689ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    3m2s  kubelet            Created container: mount-munger
	  Normal  Started    3m2s  kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-rfvqw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-644291/192.168.49.2
	Start Time:       Mon, 10 Feb 2025 12:45:24 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scgtc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-scgtc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  2m52s                 default-scheduler  Successfully assigned default/mysql-58ccfd96bb-rfvqw to functional-644291
	  Warning  Failed     2m7s (x3 over 2m52s)  kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    86s (x4 over 2m52s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     85s (x4 over 2m52s)   kubelet            Error: ErrImagePull
	  Warning  Failed     85s                   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    7s (x10 over 2m51s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7s (x10 over 2m51s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-644291/192.168.49.2
	Start Time:       Mon, 10 Feb 2025 12:45:14 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2nqqk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-2nqqk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-644291
	  Normal   BackOff    18s (x10 over 3m1s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     18s (x10 over 3m1s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3s (x5 over 3m2s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2s (x5 over 3m1s)    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2s (x5 over 3m1s)    kubelet            Error: ErrImagePull

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0210 12:49:54.876891   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:50:22.585467   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:54:54.876865   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.00s)
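Note on the failure above: the PVC "myclaim" itself provisioned successfully (see the storage-provisioner log), and sp-pod stays Pending only because every pull of docker.io/nginx and docker.io/mysql:5.7 hits Docker Hub's anonymous pull-rate limit (HTTP 429 from registry-1.docker.io). The sketch below is not part of the test harness; it shows two common ways to avoid the 429, assuming a Docker Hub account is available, and the secret name "regcred" plus the <user>/<token> values are placeholders.

  # 1) Authenticate on the host and pre-load the images into the node,
  #    so the kubelet never pulls anonymously from registry-1.docker.io:
  docker login                                        # Docker Hub credentials
  docker pull docker.io/library/nginx:latest
  docker pull docker.io/library/mysql:5.7
  minikube -p functional-644291 image load docker.io/library/nginx:latest
  minikube -p functional-644291 image load docker.io/library/mysql:5.7

  # 2) Or let the kubelet pull with credentials via an imagePullSecret
  #    attached to the default service account:
  kubectl --context functional-644291 create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<token>
  kubectl --context functional-644291 patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "regcred"}]}'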

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-644291 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-rfvqw" [8eb2250f-e0af-4fb5-af1d-34a89b84beb4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
functional_test.go:1816: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1816: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-644291 -n functional-644291
functional_test.go:1816: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-02-10 12:55:24.85950429 +0000 UTC m=+1384.016671456
functional_test.go:1816: (dbg) Run:  kubectl --context functional-644291 describe po mysql-58ccfd96bb-rfvqw -n default
functional_test.go:1816: (dbg) kubectl --context functional-644291 describe po mysql-58ccfd96bb-rfvqw -n default:
Name:             mysql-58ccfd96bb-rfvqw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-644291/192.168.49.2
Start Time:       Mon, 10 Feb 2025 12:45:24 +0000
Labels:           app=mysql
pod-template-hash=58ccfd96bb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/mysql-58ccfd96bb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scgtc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-scgtc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-58ccfd96bb-rfvqw to functional-644291
Warning  Failed     8m32s                   kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    7m1s (x5 over 9m59s)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     7m (x4 over 9m59s)      kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     7m (x5 over 9m59s)      kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x19 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m28s (x21 over 9m58s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1816: (dbg) Run:  kubectl --context functional-644291 logs mysql-58ccfd96bb-rfvqw -n default
functional_test.go:1816: (dbg) Non-zero exit: kubectl --context functional-644291 logs mysql-58ccfd96bb-rfvqw -n default: exit status 1 (69.702227ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-58ccfd96bb-rfvqw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1816: kubectl --context functional-644291 logs mysql-58ccfd96bb-rfvqw -n default: exit status 1
functional_test.go:1818: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
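The 10m0s wait logged at functional_test.go:1816 above is the harness polling for a Ready pod with label app=mysql. A rough command-line equivalent (illustrative only, assuming the same context and namespace; not what the Go test actually executes) would be:

  # Wait up to 10 minutes for the mysql pod to become Ready (fails on this run):
  kubectl --context functional-644291 -n default wait pod -l app=mysql \
    --for=condition=Ready --timeout=10m
  # Inspect why the container never started (prints "ImagePullBackOff" here):
  kubectl --context functional-644291 -n default get pod -l app=mysql \
    -o jsonpath='{.items[0].status.containerStatuses[0].state.waiting.reason}'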
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-644291
helpers_test.go:235: (dbg) docker inspect functional-644291:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb",
	        "Created": "2025-02-10T12:43:33.360051285Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 112122,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-10T12:43:33.47439621Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a",
	        "ResolvConfPath": "/var/lib/docker/containers/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb/hosts",
	        "LogPath": "/var/lib/docker/containers/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb/d315ad61861b24985154ce12bb3320b499f3428356f775510c67bceaca57e2cb-json.log",
	        "Name": "/functional-644291",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-644291:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-644291",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b33f58f3df7bcedbf80edb5fcea5f96c39f831d4247938ae3e872594bd9a025d-init/diff:/var/lib/docker/overlay2/9ffca27f7ebed742e3d0dd8f2061c1044c6b8fc8f60ace2c8ab1f353604acf23/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b33f58f3df7bcedbf80edb5fcea5f96c39f831d4247938ae3e872594bd9a025d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b33f58f3df7bcedbf80edb5fcea5f96c39f831d4247938ae3e872594bd9a025d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b33f58f3df7bcedbf80edb5fcea5f96c39f831d4247938ae3e872594bd9a025d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-644291",
	                "Source": "/var/lib/docker/volumes/functional-644291/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-644291",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-644291",
	                "name.minikube.sigs.k8s.io": "functional-644291",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0dba4d0eab9542c96779ee5090175871390aebd1d277afc15a4beddb4d24b3bf",
	            "SandboxKey": "/var/run/docker/netns/0dba4d0eab95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-644291": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "abd9025709fcbd16ff16a77b6a748d0822a7f329f09e6e731763e49c8db0ebc9",
	                    "EndpointID": "0068133dce7ba462a5b3d3d47c4276fbe7054969024428094665b5a73c1307f9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-644291",
	                        "d315ad61861b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-644291 -n functional-644291
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-644291 logs -n 25: (1.372384938s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                       Args                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-644291 image save kicbase/echo-server:functional-644291               | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh findmnt                                                    | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | -T /mount3                                                                       |                   |         |         |                     |                     |
	| image          | functional-644291 image rm                                                       | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | kicbase/echo-server:functional-644291                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| mount          | -p functional-644291                                                             | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC |                     |
	|                | --kill=true                                                                      |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/ssl/certs/78349.pem                                                         |                   |         |         |                     |                     |
	| image          | functional-644291 image ls                                                       | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /usr/share/ca-certificates/78349.pem                                             |                   |         |         |                     |                     |
	| image          | functional-644291 image load                                                     | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/ssl/certs/51391683.0                                                        |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/test/nested/copy/78349/hosts                                                |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/ssl/certs/783492.pem                                                        |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                               | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | -p functional-644291                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                           |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /usr/share/ca-certificates/783492.pem                                            |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh sudo cat                                                   | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                        |                   |         |         |                     |                     |
	| service        | functional-644291 service                                                        | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | hello-node-connect --url                                                         |                   |         |         |                     |                     |
	| image          | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | image ls --format short                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh            | functional-644291 ssh pgrep                                                      | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC |                     |
	|                | buildkitd                                                                        |                   |         |         |                     |                     |
	| image          | functional-644291 image build -t                                                 | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | localhost/my-image:functional-644291                                             |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                 |                   |         |         |                     |                     |
	| image          | functional-644291 image ls                                                       | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	| image          | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | image ls --format yaml                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | image ls --format json                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| image          | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | image ls --format table                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| update-context | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	| update-context | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	| update-context | functional-644291                                                                | functional-644291 | jenkins | v1.35.0 | 10 Feb 25 12:45 UTC | 10 Feb 25 12:45 UTC |
	|                | update-context                                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                           |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
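
Note: the audit table above traces the image lifecycle that the functional tests exercise against this profile: the echo-server image is saved to a tarball, removed from the runtime, loaded back, listed in several formats, and a new image is built from testdata. A minimal sketch of that round trip (the tarball path here is illustrative, not the Jenkins workspace path from the table):

	minikube -p functional-644291 image save kicbase/echo-server:functional-644291 /tmp/echo-server.tar
	minikube -p functional-644291 image rm kicbase/echo-server:functional-644291 --alsologtostderr
	minikube -p functional-644291 image load /tmp/echo-server.tar --alsologtostderr
	minikube -p functional-644291 image ls --format table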
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:45:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:45:19.853036  124776 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:45:19.853170  124776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:19.853180  124776 out.go:358] Setting ErrFile to fd 2...
	I0210 12:45:19.853187  124776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:19.853489  124776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:45:19.854071  124776 out.go:352] Setting JSON to false
	I0210 12:45:19.855111  124776 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12469,"bootTime":1739179051,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:45:19.855223  124776 start.go:139] virtualization: kvm guest
	I0210 12:45:19.857439  124776 out.go:177] * [functional-644291] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:45:19.860460  124776 notify.go:220] Checking for updates...
	I0210 12:45:19.860507  124776 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:45:19.862100  124776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:45:19.863628  124776 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:45:19.864928  124776 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 12:45:19.866453  124776 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:45:19.867685  124776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:45:19.869275  124776 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:45:19.869778  124776 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:45:19.894412  124776 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 12:45:19.894508  124776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:45:19.944105  124776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-10 12:45:19.93510259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:45:19.944255  124776 docker.go:318] overlay module found
	I0210 12:45:19.946960  124776 out.go:177] * Using the docker driver based on existing profile
	I0210 12:45:19.948316  124776 start.go:297] selected driver: docker
	I0210 12:45:19.948329  124776 start.go:901] validating driver "docker" against &{Name:functional-644291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-644291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:45:19.948417  124776 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:45:19.950489  124776 out.go:201] 
	W0210 12:45:19.951592  124776 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0210 12:45:19.952654  124776 out.go:201] 
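
Note: this "Last Start" attempt exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested allocation (250 MiB) is below minikube's usable minimum of 1800 MB; the profile itself was created earlier with Memory:4000, as the config dump above shows. A minimal sketch of a start request that clears the threshold, assuming the same driver and runtime (the memory value is illustrative, not taken from this run):

	minikube start -p functional-644291 --driver=docker --container-runtime=containerd --memory=2048mb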
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	20ac40802e3a2       97662d24417b3       4 minutes ago       Running             myfrontend                  0                   99dc166d66d8f       sp-pod
	6a22b231d70a2       07655ddf2eebe       9 minutes ago       Running             kubernetes-dashboard        0                   09de513bd237d       kubernetes-dashboard-7779f9b69b-gn8x2
	9efd0bdd3b3f5       115053965e86b       9 minutes ago       Running             dashboard-metrics-scraper   0                   4c2f5e56a2232       dashboard-metrics-scraper-5d59dccf9b-65dm5
	8f2d5a3cc2d71       82e4c8a736a4f       10 minutes ago      Running             echoserver                  0                   4b7d3cd7debfc       hello-node-connect-58f9cf68d8-2klqt
	30df225eca01f       56cc512116c8f       10 minutes ago      Exited              mount-munger                0                   f046e5795fbb8       busybox-mount
	08960918e1eea       d41a14a4ecff9       10 minutes ago      Running             nginx                       0                   ea1ad2cbf7664       nginx-svc
	eb3dbf3b06c67       82e4c8a736a4f       10 minutes ago      Running             echoserver                  0                   5f2fd218a3f96       hello-node-fcfd88b6f-xt54x
	e1fff5a51528d       6e38f40d628db       10 minutes ago      Running             storage-provisioner         2                   4703b5cc493c8       storage-provisioner
	7563360b4bf4c       95c0bda56fc4d       10 minutes ago      Running             kube-apiserver              0                   cabecb87eb98b       kube-apiserver-functional-644291
	b90900b5647c2       019ee182b58e2       10 minutes ago      Running             kube-controller-manager     1                   74c1dff987266       kube-controller-manager-functional-644291
	76a4e58566e6b       2b0d6572d062c       10 minutes ago      Running             kube-scheduler              1                   b90a45caf9fa6       kube-scheduler-functional-644291
	bf5b244fbab74       a9e7e6b294baf       10 minutes ago      Running             etcd                        1                   641ee0148205f       etcd-functional-644291
	157b8b452cccc       c69fa2e9cbf5f       10 minutes ago      Running             coredns                     1                   ba37af7e18a56       coredns-668d6bf9bc-m4jhh
	cbb9c47f16a76       d300845f67aeb       10 minutes ago      Running             kindnet-cni                 1                   3b04e431ca430       kindnet-f6dcs
	afd91bd053797       6e38f40d628db       10 minutes ago      Exited              storage-provisioner         1                   4703b5cc493c8       storage-provisioner
	e0428c3dd6828       e29f9c7391fd9       10 minutes ago      Running             kube-proxy                  1                   44ff8fa19f88f       kube-proxy-gfv78
	b76a9182fa02c       c69fa2e9cbf5f       11 minutes ago      Exited              coredns                     0                   ba37af7e18a56       coredns-668d6bf9bc-m4jhh
	ac13a6e9bb208       d300845f67aeb       11 minutes ago      Exited              kindnet-cni                 0                   3b04e431ca430       kindnet-f6dcs
	f90fd1eb38c42       e29f9c7391fd9       11 minutes ago      Exited              kube-proxy                  0                   44ff8fa19f88f       kube-proxy-gfv78
	3934b5b229b97       019ee182b58e2       11 minutes ago      Exited              kube-controller-manager     0                   74c1dff987266       kube-controller-manager-functional-644291
	92c950ec71308       a9e7e6b294baf       11 minutes ago      Exited              etcd                        0                   641ee0148205f       etcd-functional-644291
	df688f746edef       2b0d6572d062c       11 minutes ago      Exited              kube-scheduler              0                   b90a45caf9fa6       kube-scheduler-functional-644291
	
	
	==> containerd <==
	Feb 10 12:48:15 functional-644291 containerd[3910]: time="2025-02-10T12:48:15.237088175Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:48:15 functional-644291 containerd[3910]: time="2025-02-10T12:48:15.237163676Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=11042"
	Feb 10 12:48:23 functional-644291 containerd[3910]: time="2025-02-10T12:48:23.332847868Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Feb 10 12:48:23 functional-644291 containerd[3910]: time="2025-02-10T12:48:23.334667854Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:48:23 functional-644291 containerd[3910]: time="2025-02-10T12:48:23.599300506Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:48:24 functional-644291 containerd[3910]: time="2025-02-10T12:48:24.209398076Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:48:24 functional-644291 containerd[3910]: time="2025-02-10T12:48:24.209470814Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=11042"
	Feb 10 12:50:57 functional-644291 containerd[3910]: time="2025-02-10T12:50:57.332444506Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Feb 10 12:50:57 functional-644291 containerd[3910]: time="2025-02-10T12:50:57.334161275Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:50:57 functional-644291 containerd[3910]: time="2025-02-10T12:50:57.614758793Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.449026044Z" level=info msg="ImageCreate event name:\"docker.io/library/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.449703204Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=72198947"
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.450945011Z" level=info msg="ImageCreate event name:\"sha256:97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.456016229Z" level=info msg="ImageCreate event name:\"docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.458126414Z" level=info msg="Pulled image \"docker.io/nginx:latest\" with image id \"sha256:97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e\", repo tag \"docker.io/library/nginx:latest\", repo digest \"docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34\", size \"72188133\" in 3.12560763s"
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.458162242Z" level=info msg="PullImage \"docker.io/nginx:latest\" returns image reference \"sha256:97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e\""
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.460844706Z" level=info msg="CreateContainer within sandbox \"99dc166d66d8f77a508ee16b926f841fad3ecb019001a057bc4abe586853dea1\" for container &ContainerMetadata{Name:myfrontend,Attempt:0,}"
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.471556980Z" level=info msg="CreateContainer within sandbox \"99dc166d66d8f77a508ee16b926f841fad3ecb019001a057bc4abe586853dea1\" for &ContainerMetadata{Name:myfrontend,Attempt:0,} returns container id \"20ac40802e3a2e20bb9b3251c09a55a867b45e1baa6a25250844fc7b4e45b48b\""
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.471994838Z" level=info msg="StartContainer for \"20ac40802e3a2e20bb9b3251c09a55a867b45e1baa6a25250844fc7b4e45b48b\""
	Feb 10 12:51:00 functional-644291 containerd[3910]: time="2025-02-10T12:51:00.510547364Z" level=info msg="StartContainer for \"20ac40802e3a2e20bb9b3251c09a55a867b45e1baa6a25250844fc7b4e45b48b\" returns successfully"
	Feb 10 12:51:07 functional-644291 containerd[3910]: time="2025-02-10T12:51:07.333341237Z" level=info msg="PullImage \"docker.io/mysql:5.7\""
	Feb 10 12:51:07 functional-644291 containerd[3910]: time="2025-02-10T12:51:07.335023363Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:51:07 functional-644291 containerd[3910]: time="2025-02-10T12:51:07.588215529Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Feb 10 12:51:08 functional-644291 containerd[3910]: time="2025-02-10T12:51:08.214198030Z" level=error msg="PullImage \"docker.io/mysql:5.7\" failed" error="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Feb 10 12:51:08 functional-644291 containerd[3910]: time="2025-02-10T12:51:08.214266997Z" level=info msg="stop pulling image docker.io/library/mysql:5.7: active requests=0, bytes read=11042"
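
Note: the containerd log above shows the two recurring pull problems in this run: docker.io pulls failing with 429 Too Many Requests (Docker Hub's anonymous pull rate limit, which is why mysql:5.7 keeps backing off and nginx:latest only succeeds on a later retry), and repeated "failed to decode hosts.toml" warnings from the per-registry host configuration. A hedged sketch of two ways to keep the node from pulling against the Hub limit (the mirror URL is a placeholder, not taken from this report):

	# side-load the image from the host so the kubelet never has to pull it inside the node
	minikube -p functional-644291 image load docker.io/library/mysql:5.7
	# or start the profile pointed at a pull-through registry mirror
	minikube start -p functional-644291 --registry-mirror=https://mirror.example.com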
	
	
	==> coredns [157b8b452cccc6d85387940af533c8c744a2fd10d4041e1b27c4062557150f37] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49894 - 21610 "HINFO IN 9000676847841937490.1454512200463263474. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033047055s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b76a9182fa02cab23c980eb37427ffbb882d273f661beb2d5bc35583fb4094d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:33365 - 64930 "HINFO IN 2136753020773598883.2258325954173043898. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043502811s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-644291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-644291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7d7e9539cf1c3abd6114cdafa89e43b830da4e04
	                    minikube.k8s.io/name=functional-644291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T12_43_46_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:43:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-644291
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:55:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:54:43 +0000   Mon, 10 Feb 2025 12:43:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:54:43 +0000   Mon, 10 Feb 2025 12:43:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:54:43 +0000   Mon, 10 Feb 2025 12:43:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:54:43 +0000   Mon, 10 Feb 2025 12:43:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-644291
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859368Ki
	  pods:               110
	System Info:
	  Machine ID:                 545e95d59278495ca23ad2bc10457f10
	  System UUID:                d0bcccc7-2834-4dcd-b499-02cd12257fab
	  Boot ID:                    1d7cad77-75d7-418d-a590-e8096751a144
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.24
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-connect-58f9cf68d8-2klqt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-fcfd88b6f-xt54x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-58ccfd96bb-rfvqw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-668d6bf9bc-m4jhh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-644291                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-f6dcs                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-644291              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-644291     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-gfv78                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-644291              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-5d59dccf9b-65dm5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-7779f9b69b-gn8x2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-644291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-644291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-644291 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node functional-644291 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node functional-644291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     11m                kubelet          Node functional-644291 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                node-controller  Node functional-644291 event: Registered Node functional-644291 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-644291 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-644291 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-644291 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-644291 event: Registered Node functional-644291 in Controller
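
Note: the "Allocated resources" totals in the node description above are the column sums of the non-terminated pods: CPU requests 1450m = 600m (mysql) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m each for coredns, etcd, kindnet and kube-scheduler; CPU limits 800m = 700m (mysql) + 100m (kindnet); memory requests 732Mi = 512Mi + 100Mi + 70Mi + 50Mi; memory limits 920Mi = 700Mi + 170Mi + 50Mi.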
	
	
	==> dmesg <==
	[Feb10 09:17]  #2
	[  +0.001427]  #3
	[  +0.000000]  #4
	[  +0.003161] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003164] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002021] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002123]  #5
	[  +0.000751]  #6
	[  +0.000811]  #7
	[  +0.060730] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.448106] i8042: Warning: Keylock active
	[  +0.009792] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004111] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001792] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.002113] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001740] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000002] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.629359] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026636] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.129242] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [92c950ec7130892c404773ce07718add244342994b701431906b3dddde039bbd] <==
	{"level":"info","ts":"2025-02-10T12:43:41.005175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-02-10T12:43:41.005188Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-10T12:43:41.006197Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:43:41.006919Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:43:41.006916Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-644291 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T12:43:41.007014Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:43:41.007211Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T12:43:41.007270Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T12:43:41.007805Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:43:41.007925Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:43:41.008094Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:43:41.008168Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:43:41.008200Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:43:41.008752Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T12:43:41.008767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-10T12:44:36.963355Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-02-10T12:44:36.963427Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"functional-644291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2025-02-10T12:44:36.963518Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-10T12:44:36.963573Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-10T12:44:36.965165Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-02-10T12:44:36.965200Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-02-10T12:44:36.965252Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-02-10T12:44:36.966669Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-10T12:44:36.966773Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-10T12:44:36.966792Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"functional-644291","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [bf5b244fbab74abcb0c839536f949c1bec38a52c2c0afcf44027ba78ed3402e3] <==
	{"level":"info","ts":"2025-02-10T12:44:39.208693Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-10T12:44:39.208707Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-10T12:44:39.209197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2025-02-10T12:44:39.209277Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-02-10T12:44:39.209390Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:44:39.209435Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:44:40.895874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-10T12:44:40.895928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-10T12:44:40.895953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-10T12:44:40.895965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2025-02-10T12:44:40.895987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-10T12:44:40.896020Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2025-02-10T12:44:40.896027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-02-10T12:44:40.897112Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-644291 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T12:44:40.897130Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:44:40.897160Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:44:40.897329Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T12:44:40.897398Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T12:44:40.898080Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:44:40.898334Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:44:40.898808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T12:44:40.898962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-10T12:54:40.914579Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1121}
	{"level":"info","ts":"2025-02-10T12:54:40.926616Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":1121,"took":"11.68863ms","hash":1805867378,"current-db-size-bytes":4169728,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":1576960,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-02-10T12:54:40.926666Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1805867378,"revision":1121,"compact-revision":-1}
	
	
	==> kernel <==
	 12:55:26 up  3:37,  0 users,  load average: 0.01, 0.16, 0.25
	Linux functional-644291 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [ac13a6e9bb20897fabc062ef2bee26a5e7f402da3d9f3fa6763e4e3953558b7e] <==
	I0210 12:43:53.292010       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:43:53.292272       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0210 12:43:53.292437       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:43:53.292459       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:43:53.292507       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:43:53.785288       1 controller.go:361] Starting controller kube-network-policies
	I0210 12:43:53.785445       1 controller.go:365] Waiting for informer caches to sync
	I0210 12:43:53.785481       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0210 12:43:53.985635       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0210 12:43:53.985668       1 metrics.go:61] Registering metrics
	I0210 12:43:53.985709       1 controller.go:401] Syncing nftables rules
	I0210 12:44:03.788570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:44:03.788647       1 main.go:301] handling current node
	I0210 12:44:13.792550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:44:13.792585       1 main.go:301] handling current node
	I0210 12:44:23.786350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:44:23.786408       1 main.go:301] handling current node
	
	
	==> kindnet [cbb9c47f16a765a1e24f1746f2a90d4e2f87f9333615c38c1d66822e03daa28d] <==
	I0210 12:53:18.088603       1 main.go:301] handling current node
	I0210 12:53:28.085609       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:53:28.085673       1 main.go:301] handling current node
	I0210 12:53:38.093881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:53:38.093924       1 main.go:301] handling current node
	I0210 12:53:48.086585       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:53:48.086619       1 main.go:301] handling current node
	I0210 12:53:58.085571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:53:58.085611       1 main.go:301] handling current node
	I0210 12:54:08.085634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:54:08.085678       1 main.go:301] handling current node
	I0210 12:54:18.092558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:54:18.092599       1 main.go:301] handling current node
	I0210 12:54:28.085638       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:54:28.085686       1 main.go:301] handling current node
	I0210 12:54:38.094769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:54:38.094823       1 main.go:301] handling current node
	I0210 12:54:48.087201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:54:48.087240       1 main.go:301] handling current node
	I0210 12:54:58.085292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:54:58.085327       1 main.go:301] handling current node
	I0210 12:55:08.088542       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:55:08.088578       1 main.go:301] handling current node
	I0210 12:55:18.092551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0210 12:55:18.092584       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7563360b4bf4c25cdf4e474bca18c3463b9b315e3236d6f259471e323e2470d4] <==
	I0210 12:44:41.986254       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:44:41.986209       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:44:41.993551       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:44:41.993596       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:44:41.993617       1 policy_source.go:240] refreshing policies
	I0210 12:44:42.006830       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:44:42.085871       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:44:42.394427       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:44:42.837540       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0210 12:44:43.096666       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0210 12:44:43.097903       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:44:43.107136       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:44:43.739077       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:44:43.826121       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:44:43.874093       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:44:43.879200       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:44:45.345527       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0210 12:45:04.610986       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.5.46"}
	I0210 12:45:08.677941       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.131.7"}
	I0210 12:45:10.207028       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.194.156"}
	I0210 12:45:20.885215       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.4.198"}
	I0210 12:45:24.531594       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.44.218"}
	I0210 12:45:24.719708       1 controller.go:615] quota admission added evaluator for: namespaces
	I0210 12:45:24.911621       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.167.0"}
	I0210 12:45:24.927250       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.104.161"}
	
	
	==> kube-controller-manager [3934b5b229b97f5a7ed5e08d976410ea5d27fc50b9ac7c3351eda09b51e82cec] <==
	I0210 12:43:49.262096       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:43:49.262134       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:43:49.262892       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:43:49.263626       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:43:49.264709       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:43:49.267851       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:43:49.269241       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:43:49.270234       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:43:49.281576       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:43:49.872885       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:43:50.412675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="194.785522ms"
	I0210 12:43:50.486723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="73.994992ms"
	I0210 12:43:50.486872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="87.131µs"
	I0210 12:43:50.496230       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="54.074µs"
	I0210 12:43:50.809736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.609113ms"
	I0210 12:43:50.886715       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="76.926399ms"
	I0210 12:43:50.886829       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="63.635µs"
	I0210 12:43:52.229371       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="64.682µs"
	I0210 12:43:52.233954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="71.024µs"
	I0210 12:43:52.236812       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="84.232µs"
	I0210 12:43:55.626373       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:44:05.247393       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="92.801µs"
	I0210 12:44:05.263441       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="5.896954ms"
	I0210 12:44:05.263529       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.365µs"
	I0210 12:44:16.032280       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	
	
	==> kube-controller-manager [b90900b5647c2fefe53e414ca1ee2b3a12e8c5f875d3f25e228206ceb2f2bbcb] <==
	I0210 12:45:24.884798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="92.033µs"
	I0210 12:45:24.889712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="44.096µs"
	I0210 12:45:24.897227       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="70.242099ms"
	I0210 12:45:24.897329       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="51.913µs"
	I0210 12:45:24.899295       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="84.19µs"
	I0210 12:45:26.566449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="66.733µs"
	I0210 12:45:27.576947       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="5.804858ms"
	I0210 12:45:27.577051       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5d59dccf9b" duration="40.492µs"
	I0210 12:45:32.606042       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="6.13261ms"
	I0210 12:45:32.606149       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-7779f9b69b" duration="61.449µs"
	I0210 12:45:41.343029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="69.864µs"
	I0210 12:45:42.953739       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:45:55.341091       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="98.129µs"
	I0210 12:46:09.340799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="83.906µs"
	I0210 12:46:24.344270       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="62.236µs"
	I0210 12:46:39.342187       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="83.264µs"
	I0210 12:47:07.340268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="71.089µs"
	I0210 12:47:21.341538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="67.658µs"
	I0210 12:48:38.341621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="76.109µs"
	I0210 12:48:53.340690       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="78.86µs"
	I0210 12:49:47.550244       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:51:09.782416       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	I0210 12:51:19.342775       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="123.115µs"
	I0210 12:51:32.344031       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-58ccfd96bb" duration="102.993µs"
	I0210 12:54:43.833148       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="functional-644291"
	
	
	==> kube-proxy [e0428c3dd6828e3ae01d646fedaff45d609242bf1de8d2b8a04393132d5c5dd7] <==
	I0210 12:44:27.518664       1 server_linux.go:66] "Using iptables proxy"
	E0210 12:44:27.638774       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644291\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0210 12:44:28.683428       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644291\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0210 12:44:31.077004       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644291\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0210 12:44:35.232829       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-644291\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0210 12:44:44.645712       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0210 12:44:44.645783       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:44:44.666074       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0210 12:44:44.666133       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:44:44.667984       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:44:44.668338       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:44:44.668369       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:44:44.669553       1 config.go:329] "Starting node config controller"
	I0210 12:44:44.669600       1 config.go:199] "Starting service config controller"
	I0210 12:44:44.669641       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:44:44.669570       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:44:44.669691       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:44:44.669895       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:44:44.770647       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:44:44.770675       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:44:44.770713       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [f90fd1eb38c424658fd5f3d345ce02cb4b8add522542df83bd62a7fab236ffa4] <==
	I0210 12:43:51.214304       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:43:51.368983       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0210 12:43:51.369061       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:43:51.388066       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0210 12:43:51.388120       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:43:51.390227       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:43:51.390625       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:43:51.390679       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:43:51.392311       1 config.go:199] "Starting service config controller"
	I0210 12:43:51.392354       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:43:51.392373       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:43:51.392403       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:43:51.392408       1 config.go:329] "Starting node config controller"
	I0210 12:43:51.392438       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:43:51.492547       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:43:51.492565       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:43:51.492626       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [76a4e58566e6b29b7b228f28bbc8199b35156ffdf70e6e549056e1f6be07c1f9] <==
	I0210 12:44:39.896946       1 serving.go:386] Generated self-signed cert in-memory
	W0210 12:44:41.985761       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0210 12:44:41.985873       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0210 12:44:41.985934       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0210 12:44:41.985980       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:44:41.999972       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:44:42.000004       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:44:42.002327       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:44:42.002378       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:44:42.002618       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:44:42.002711       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:44:42.102911       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [df688f746edef92a85218f714697b7e6b8f4ba6655449c4e8ad80899be03b33a] <==
	E0210 12:43:42.789061       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789104       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0210 12:43:42.789142       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789332       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 12:43:42.789696       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789512       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:42.789729       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789646       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:42.789748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:42.789668       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:42.789807       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.607222       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 12:43:43.607267       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.609214       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 12:43:43.609242       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 12:43:43.673863       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 12:43:43.673907       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.726265       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:43.726320       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.752800       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 12:43:43.752847       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 12:43:43.773288       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 12:43:43.773327       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:43:46.415294       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0210 12:44:37.007810       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 10 12:50:56 functional-644291 kubelet[4837]: E0210 12:50:56.332622    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:51:01 functional-644291 kubelet[4837]: I0210 12:51:01.316127    4837 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.121447411 podStartE2EDuration="5m47.316104346s" podCreationTimestamp="2025-02-10 12:45:14 +0000 UTC" firstStartedPulling="2025-02-10 12:45:15.264950985 +0000 UTC m=+37.017898648" lastFinishedPulling="2025-02-10 12:51:00.45960791 +0000 UTC m=+382.212555583" observedRunningTime="2025-02-10 12:51:01.3157631 +0000 UTC m=+383.068710791" watchObservedRunningTime="2025-02-10 12:51:01.316104346 +0000 UTC m=+383.069052028"
	Feb 10 12:51:08 functional-644291 kubelet[4837]: E0210 12:51:08.214538    4837 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Feb 10 12:51:08 functional-644291 kubelet[4837]: E0210 12:51:08.214609    4837 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Feb 10 12:51:08 functional-644291 kubelet[4837]: E0210 12:51:08.214793    4837 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-scgtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext
:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-58ccfd96bb-rfvqw_default(8eb2250f-e0af-4fb5-af1d-34a89b84beb4): ErrImagePull: failed to pull and unpack image \"docker.io/library/mysql:5.7\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Feb 10 12:51:08 functional-644291 kubelet[4837]: E0210 12:51:08.216010    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:51:19 functional-644291 kubelet[4837]: E0210 12:51:19.333138    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:51:32 functional-644291 kubelet[4837]: E0210 12:51:32.335499    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:51:46 functional-644291 kubelet[4837]: E0210 12:51:46.333094    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:52:00 functional-644291 kubelet[4837]: E0210 12:52:00.332805    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:52:15 functional-644291 kubelet[4837]: E0210 12:52:15.333296    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:52:30 functional-644291 kubelet[4837]: E0210 12:52:30.332861    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:52:42 functional-644291 kubelet[4837]: E0210 12:52:42.332793    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:52:57 functional-644291 kubelet[4837]: E0210 12:52:57.333042    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:53:10 functional-644291 kubelet[4837]: E0210 12:53:10.332186    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:53:24 functional-644291 kubelet[4837]: E0210 12:53:24.333139    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:53:38 functional-644291 kubelet[4837]: E0210 12:53:38.333509    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:53:49 functional-644291 kubelet[4837]: E0210 12:53:49.332154    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:54:00 functional-644291 kubelet[4837]: E0210 12:54:00.332734    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:54:14 functional-644291 kubelet[4837]: E0210 12:54:14.332323    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:54:26 functional-644291 kubelet[4837]: E0210 12:54:26.333191    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:54:38 functional-644291 kubelet[4837]: E0210 12:54:38.332824    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:54:50 functional-644291 kubelet[4837]: E0210 12:54:50.333035    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:55:02 functional-644291 kubelet[4837]: E0210 12:55:02.332835    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	Feb 10 12:55:17 functional-644291 kubelet[4837]: E0210 12:55:17.333208    4837 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/mysql:5.7\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-58ccfd96bb-rfvqw" podUID="8eb2250f-e0af-4fb5-af1d-34a89b84beb4"
	
	
	==> kubernetes-dashboard [6a22b231d70a2f45c67d34b67541a3fca8526182633d3954bf6ff7dc06906bf0] <==
	2025/02/10 12:45:31 Starting overwatch
	2025/02/10 12:45:31 Using namespace: kubernetes-dashboard
	2025/02/10 12:45:31 Using in-cluster config to connect to apiserver
	2025/02/10 12:45:31 Using secret token for csrf signing
	2025/02/10 12:45:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/02/10 12:45:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/02/10 12:45:31 Successful initial request to the apiserver, version: v1.32.1
	2025/02/10 12:45:31 Generating JWE encryption key
	2025/02/10 12:45:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/02/10 12:45:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/02/10 12:45:32 Initializing JWE encryption key from synchronized object
	2025/02/10 12:45:32 Creating in-cluster Sidecar client
	2025/02/10 12:45:32 Successful request to sidecar
	2025/02/10 12:45:32 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [afd91bd053797ff635d08e8bff3c579ca4cbf5ef1e125be53cfc50aa325b23d3] <==
	I0210 12:44:27.425730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0210 12:44:27.486218       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e1fff5a51528d6f74defed72d370d5d84207d5621bd0adacd571143949f3b9f6] <==
	I0210 12:44:42.686434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0210 12:44:42.694138       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0210 12:44:42.694191       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0210 12:45:00.088492       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0210 12:45:00.088689       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-644291_1042b5d0-ff17-44c1-99ee-2ec9cfc2f9b4!
	I0210 12:45:00.088627       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acae380a-a98c-4b99-ad8b-6b943ca6c9d3", APIVersion:"v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-644291_1042b5d0-ff17-44c1-99ee-2ec9cfc2f9b4 became leader
	I0210 12:45:00.188961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-644291_1042b5d0-ff17-44c1-99ee-2ec9cfc2f9b4!
	I0210 12:45:14.691054       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0210 12:45:14.691169       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4e8ce7ae-3776-4ae4-8e77-8f9fd9ad8ec6 392 0 2025-02-10 12:43:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2025-02-10 12:43:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f2d0b2d1-269b-4682-8251-fdca61914f2b 657 0 2025-02-10 12:45:14 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2025-02-10 12:45:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2025-02-10 12:45:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0210 12:45:14.691666       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b" provisioned
	I0210 12:45:14.691699       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0210 12:45:14.691709       1 volume_store.go:212] Trying to save persistentvolume "pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b"
	I0210 12:45:14.692903       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f2d0b2d1-269b-4682-8251-fdca61914f2b", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0210 12:45:14.702051       1 volume_store.go:219] persistentvolume "pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b" saved
	I0210 12:45:14.702243       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f2d0b2d1-269b-4682-8251-fdca61914f2b", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f2d0b2d1-269b-4682-8251-fdca61914f2b
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-644291 -n functional-644291
helpers_test.go:261: (dbg) Run:  kubectl --context functional-644291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-58ccfd96bb-rfvqw
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-644291 describe pod busybox-mount mysql-58ccfd96bb-rfvqw
helpers_test.go:282: (dbg) kubectl --context functional-644291 describe pod busybox-mount mysql-58ccfd96bb-rfvqw:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-644291/192.168.49.2
	Start Time:       Mon, 10 Feb 2025 12:45:14 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  containerd://30df225eca01f0475a391579ccf8ee2ada556dcf548c063d846476dfea9a8982
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 10 Feb 2025 12:45:15 +0000
	      Finished:     Mon, 10 Feb 2025 12:45:15 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s59l5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-s59l5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-644291
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 689ms (689ms including waiting). Image size: 2395207 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-58ccfd96bb-rfvqw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-644291/192.168.49.2
	Start Time:       Mon, 10 Feb 2025 12:45:24 +0000
	Labels:           app=mysql
	                  pod-template-hash=58ccfd96bb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/mysql-58ccfd96bb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-scgtc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-scgtc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-58ccfd96bb-rfvqw to functional-644291
	  Warning  Failed     8m35s                 kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m3s (x4 over 10m)    kubelet            Failed to pull image "docker.io/mysql:5.7": failed to pull and unpack image "docker.io/library/mysql:5.7": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/mysql/manifests/sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m3s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m55s (x19 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m31s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.70s)
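Every docker.io pull in the events above fails with 429 Too Many Requests from registry-1.docker.io, and the error text itself points at authentication as the way to raise the limit. The lines below are only a sketch of such a pre-step for the CI host; the DOCKERHUB_USER / DOCKERHUB_TOKEN credentials and the mirror URL are hypothetical and not part of this run:
	# Hypothetical CI pre-step: authenticate the host's Docker daemon so docker.io
	# pulls (mysql:5.7, busybox, kicbase/echo-server) count against an authenticated
	# rate limit instead of the anonymous one.
	echo "$DOCKERHUB_TOKEN" | docker login --username "$DOCKERHUB_USER" --password-stdin
	# minikube start also accepts --registry-mirror for routing pulls through a
	# pull-through cache, if one is available, e.g.:
	# out/minikube-linux-amd64 start -p functional-644291 --registry-mirror=https://mirror.example.internal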

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (481.039078ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:361: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image load --daemon kicbase/echo-server:functional-644291 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-644291" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image load --daemon kicbase/echo-server:functional-644291 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image ls
functional_test.go:463: expected "kicbase/echo-server:functional-644291" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:252: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (433.642822ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:254: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image save kicbase/echo-server:functional-644291 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:403: expected "/home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:428: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0210 12:45:23.265377  126790 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:45:23.265692  126790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:23.265703  126790 out.go:358] Setting ErrFile to fd 2...
	I0210 12:45:23.265707  126790 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:23.265909  126790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:45:23.266511  126790 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:45:23.266608  126790 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:45:23.266980  126790 cli_runner.go:164] Run: docker container inspect functional-644291 --format={{.State.Status}}
	I0210 12:45:23.285471  126790 ssh_runner.go:195] Run: systemctl --version
	I0210 12:45:23.285520  126790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644291
	I0210 12:45:23.303852  126790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/functional-644291/id_rsa Username:docker}
	I0210 12:45:23.392987  126790 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar
	W0210 12:45:23.393065  126790 cache_images.go:253] Failed to load cached images for "functional-644291": loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar: no such file or directory
	I0210 12:45:23.393094  126790 cache_images.go:265] failed pushing to: functional-644291

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
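ImageSaveToFile above and ImageLoadFromFile here fail as knock-on effects of ImageCommands/Setup: kicbase/echo-server:1.0 was never pulled because of the rate limit, so no kicbase/echo-server:functional-644291 tag existed to load into minikube, `image save` wrote no tarball, and `image load` then hits the "stat ... no such file or directory" shown in the stderr above. For reference, the round trip these subtests exercise is roughly the following sketch, with /tmp/echo-server-save.tar as an illustrative path rather than the suite's workspace path:
	# Sketch of the image round trip exercised by these subtests (illustrative path).
	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-644291
	out/minikube-linux-amd64 -p functional-644291 image load --daemon kicbase/echo-server:functional-644291
	out/minikube-linux-amd64 -p functional-644291 image save kicbase/echo-server:functional-644291 /tmp/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-644291 image load /tmp/echo-server-save.tar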

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-644291
functional_test.go:436: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-644291: exit status 1 (16.916384ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-644291

                                                
                                                
** /stderr **
functional_test.go:438: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-644291

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                    

Test pass (295/331)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 6.19
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.1/json-events 4.6
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.2
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.09
21 TestBinaryMirror 0.75
22 TestOffline 52.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 140.25
29 TestAddons/serial/Volcano 38.51
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 7.44
35 TestAddons/parallel/Registry 14.71
36 TestAddons/parallel/Ingress 355.98
37 TestAddons/parallel/InspektorGadget 10.7
38 TestAddons/parallel/MetricsServer 6.8
40 TestAddons/parallel/CSI 54.7
41 TestAddons/parallel/Headlamp 15.39
42 TestAddons/parallel/CloudSpanner 5.61
44 TestAddons/parallel/NvidiaDevicePlugin 6.48
45 TestAddons/parallel/Yakd 10.63
46 TestAddons/parallel/AmdGpuDevicePlugin 5.45
47 TestAddons/StoppedEnableDisable 12.12
48 TestCertOptions 25.53
49 TestCertExpiration 213.28
51 TestForceSystemdFlag 30.24
52 TestForceSystemdEnv 33.65
54 TestKVMDriverInstallOrUpdate 3.52
58 TestErrorSpam/setup 22.59
59 TestErrorSpam/start 0.58
60 TestErrorSpam/status 0.87
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.57
63 TestErrorSpam/stop 1.37
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 38.65
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.26
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.85
75 TestFunctional/serial/CacheCmd/cache/add_local 1.24
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 43.33
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.4
86 TestFunctional/serial/LogsFileCmd 1.4
87 TestFunctional/serial/InvalidService 4.06
89 TestFunctional/parallel/ConfigCmd 0.37
90 TestFunctional/parallel/DashboardCmd 13.64
91 TestFunctional/parallel/DryRun 0.36
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.96
97 TestFunctional/parallel/ServiceCmdConnect 7.66
98 TestFunctional/parallel/AddonsCmd 0.14
101 TestFunctional/parallel/SSHCmd 0.54
102 TestFunctional/parallel/CpCmd 1.89
104 TestFunctional/parallel/FileSync 0.26
105 TestFunctional/parallel/CertSync 1.58
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
113 TestFunctional/parallel/License 0.19
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.18
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.26
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
121 TestFunctional/parallel/ProfileCmd/profile_list 0.37
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
123 TestFunctional/parallel/MountCmd/any-port 6.84
124 TestFunctional/parallel/ServiceCmd/List 0.31
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
127 TestFunctional/parallel/ServiceCmd/Format 0.33
128 TestFunctional/parallel/ServiceCmd/URL 0.34
129 TestFunctional/parallel/MountCmd/specific-port 1.65
130 TestFunctional/parallel/Version/short 0.05
131 TestFunctional/parallel/Version/components 0.49
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.12
145 TestFunctional/parallel/MountCmd/VerifyCleanup 1.82
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 94.41
163 TestMultiControlPlane/serial/DeployApp 3.99
164 TestMultiControlPlane/serial/PingHostFromPods 1.04
165 TestMultiControlPlane/serial/AddWorkerNode 21.01
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
168 TestMultiControlPlane/serial/CopyFile 15.85
169 TestMultiControlPlane/serial/StopSecondaryNode 12.5
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
171 TestMultiControlPlane/serial/RestartSecondaryNode 15.34
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 97.73
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.18
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
176 TestMultiControlPlane/serial/StopCluster 35.64
177 TestMultiControlPlane/serial/RestartCluster 83.95
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 36.67
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
184 TestJSONOutput/start/Command 54.89
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.65
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.58
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.61
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.21
209 TestKicCustomNetwork/create_custom_network 27.03
210 TestKicCustomNetwork/use_default_bridge_network 22.83
211 TestKicExistingNetwork 25.01
212 TestKicCustomSubnet 26.16
213 TestKicStaticIP 24.27
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 46.69
218 TestMountStart/serial/StartWithMountFirst 5.3
219 TestMountStart/serial/VerifyMountFirst 0.25
220 TestMountStart/serial/StartWithMountSecond 7.8
221 TestMountStart/serial/VerifyMountSecond 0.24
222 TestMountStart/serial/DeleteFirst 1.61
223 TestMountStart/serial/VerifyMountPostDelete 0.25
224 TestMountStart/serial/Stop 1.18
225 TestMountStart/serial/RestartStopped 6.68
226 TestMountStart/serial/VerifyMountPostStop 0.24
229 TestMultiNode/serial/FreshStart2Nodes 67.55
230 TestMultiNode/serial/DeployApp2Nodes 17.3
231 TestMultiNode/serial/PingHostFrom2Pods 0.71
232 TestMultiNode/serial/AddNode 17.4
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.62
235 TestMultiNode/serial/CopyFile 8.95
236 TestMultiNode/serial/StopNode 2.09
237 TestMultiNode/serial/StartAfterStop 8.6
238 TestMultiNode/serial/RestartKeepsNodes 78.51
239 TestMultiNode/serial/DeleteNode 4.95
240 TestMultiNode/serial/StopMultiNode 23.83
241 TestMultiNode/serial/RestartMultiNode 48.12
242 TestMultiNode/serial/ValidateNameConflict 22.36
247 TestPreload 91.12
249 TestScheduledStopUnix 96.66
252 TestInsufficientStorage 12.35
253 TestRunningBinaryUpgrade 63.04
255 TestKubernetesUpgrade 320.13
256 TestMissingContainerUpgrade 107.24
258 TestStoppedBinaryUpgrade/Setup 0.37
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 37.7
261 TestStoppedBinaryUpgrade/Upgrade 132.88
262 TestNoKubernetes/serial/StartWithStopK8s 8.07
263 TestNoKubernetes/serial/Start 4.2
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
265 TestNoKubernetes/serial/ProfileList 3.54
266 TestNoKubernetes/serial/Stop 1.49
267 TestNoKubernetes/serial/StartNoArgs 6.4
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
283 TestNetworkPlugins/group/false 3.27
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
289 TestPause/serial/Start 45.66
290 TestPause/serial/SecondStartNoReconfiguration 7.04
291 TestPause/serial/Pause 0.67
292 TestPause/serial/VerifyStatus 0.29
293 TestPause/serial/Unpause 0.61
294 TestPause/serial/PauseAgain 0.76
295 TestPause/serial/DeletePaused 6
296 TestPause/serial/VerifyDeletedResources 17.24
298 TestStartStop/group/old-k8s-version/serial/FirstStart 133.16
300 TestStartStop/group/no-preload/serial/FirstStart 62.21
302 TestStartStop/group/embed-certs/serial/FirstStart 42.39
303 TestStartStop/group/no-preload/serial/DeployApp 8.28
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
305 TestStartStop/group/no-preload/serial/Stop 12.02
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 285.4
308 TestStartStop/group/embed-certs/serial/DeployApp 7.3
309 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
310 TestStartStop/group/embed-certs/serial/Stop 11.98
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
312 TestStartStop/group/embed-certs/serial/SecondStart 262.72
313 TestStartStop/group/old-k8s-version/serial/DeployApp 8.48
314 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.81
315 TestStartStop/group/old-k8s-version/serial/Stop 11.98
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
317 TestStartStop/group/old-k8s-version/serial/SecondStart 26.99
318 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 22.01
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
320 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
321 TestStartStop/group/old-k8s-version/serial/Pause 2.53
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.8
325 TestStartStop/group/newest-cni/serial/FirstStart 25.56
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.25
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
329 TestStartStop/group/newest-cni/serial/Stop 1.21
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
331 TestStartStop/group/newest-cni/serial/SecondStart 12.98
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.88
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
337 TestStartStop/group/newest-cni/serial/Pause 2.77
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 265.39
340 TestNetworkPlugins/group/auto/Start 44.62
341 TestNetworkPlugins/group/auto/KubeletFlags 0.28
342 TestNetworkPlugins/group/auto/NetCatPod 9.22
343 TestNetworkPlugins/group/auto/DNS 0.12
344 TestNetworkPlugins/group/auto/Localhost 0.1
345 TestNetworkPlugins/group/auto/HairPin 0.1
346 TestNetworkPlugins/group/kindnet/Start 41.88
347 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
348 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
350 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
351 TestStartStop/group/no-preload/serial/Pause 2.69
352 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
353 TestNetworkPlugins/group/calico/Start 54.18
354 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
355 TestStartStop/group/embed-certs/serial/Pause 3.11
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/custom-flannel/Start 43.2
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.3
360 TestNetworkPlugins/group/kindnet/DNS 0.14
361 TestNetworkPlugins/group/kindnet/Localhost 0.13
362 TestNetworkPlugins/group/kindnet/HairPin 0.19
363 TestNetworkPlugins/group/enable-default-cni/Start 64.95
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.22
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.27
368 TestNetworkPlugins/group/calico/NetCatPod 9.21
369 TestNetworkPlugins/group/custom-flannel/DNS 0.14
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
372 TestNetworkPlugins/group/calico/DNS 0.13
373 TestNetworkPlugins/group/calico/Localhost 0.12
374 TestNetworkPlugins/group/calico/HairPin 0.14
375 TestNetworkPlugins/group/flannel/Start 39.37
376 TestNetworkPlugins/group/bridge/Start 41.39
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
384 TestNetworkPlugins/group/flannel/NetCatPod 9.2
385 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
386 TestNetworkPlugins/group/bridge/NetCatPod 8.2
387 TestNetworkPlugins/group/flannel/DNS 0.14
388 TestNetworkPlugins/group/flannel/Localhost 0.12
389 TestNetworkPlugins/group/flannel/HairPin 0.11
390 TestNetworkPlugins/group/bridge/DNS 0.12
391 TestNetworkPlugins/group/bridge/Localhost 0.1
392 TestNetworkPlugins/group/bridge/HairPin 0.1
393 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
394 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
395 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
396 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.56
x
+
TestDownloadOnly/v1.20.0/json-events (6.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-424031 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-424031 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.191933033s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0210 12:32:27.075888   78349 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0210 12:32:27.076032   78349 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-424031
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-424031: exit status 85 (63.669942ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-424031 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC |          |
	|         | -p download-only-424031        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:32:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:32:20.927122   78361 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:32:20.927241   78361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:32:20.927251   78361 out.go:358] Setting ErrFile to fd 2...
	I0210 12:32:20.927256   78361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:32:20.927455   78361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	W0210 12:32:20.927572   78361 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20390-71607/.minikube/config/config.json: open /home/jenkins/minikube-integration/20390-71607/.minikube/config/config.json: no such file or directory
	I0210 12:32:20.928119   78361 out.go:352] Setting JSON to true
	I0210 12:32:20.929073   78361 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11690,"bootTime":1739179051,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:32:20.929178   78361 start.go:139] virtualization: kvm guest
	I0210 12:32:20.931567   78361 out.go:97] [download-only-424031] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0210 12:32:20.931756   78361 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball: no such file or directory
	I0210 12:32:20.931765   78361 notify.go:220] Checking for updates...
	I0210 12:32:20.933018   78361 out.go:169] MINIKUBE_LOCATION=20390
	I0210 12:32:20.934426   78361 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:32:20.935819   78361 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:32:20.937104   78361 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 12:32:20.938458   78361 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0210 12:32:20.941051   78361 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 12:32:20.941261   78361 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:32:20.962940   78361 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 12:32:20.963016   78361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:32:21.009080   78361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-10 12:32:21.000165981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:32:21.009197   78361 docker.go:318] overlay module found
	I0210 12:32:21.011011   78361 out.go:97] Using the docker driver based on user configuration
	I0210 12:32:21.011045   78361 start.go:297] selected driver: docker
	I0210 12:32:21.011061   78361 start.go:901] validating driver "docker" against <nil>
	I0210 12:32:21.011189   78361 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:32:21.059295   78361 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-10 12:32:21.050993353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:32:21.059466   78361 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:32:21.059998   78361 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0210 12:32:21.060167   78361 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 12:32:21.062154   78361 out.go:169] Using Docker driver with root privileges
	I0210 12:32:21.063395   78361 cni.go:84] Creating CNI manager for ""
	I0210 12:32:21.063472   78361 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0210 12:32:21.063486   78361 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 12:32:21.063577   78361 start.go:340] cluster config:
	{Name:download-only-424031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-424031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:container
d CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:32:21.064973   78361 out.go:97] Starting "download-only-424031" primary control-plane node in "download-only-424031" cluster
	I0210 12:32:21.065000   78361 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0210 12:32:21.066162   78361 out.go:97] Pulling base image v0.0.46 ...
	I0210 12:32:21.066193   78361 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0210 12:32:21.066301   78361 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
	I0210 12:32:21.082500   78361 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0210 12:32:21.082696   78361 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local cache directory
	I0210 12:32:21.082793   78361 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 to local cache
	I0210 12:32:21.099846   78361 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0210 12:32:21.099879   78361 cache.go:56] Caching tarball of preloaded images
	I0210 12:32:21.100047   78361 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0210 12:32:21.101920   78361 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0210 12:32:21.101950   78361 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0210 12:32:21.148597   78361 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0210 12:32:24.229325   78361 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 as a tarball
	I0210 12:32:25.632372   78361 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0210 12:32:25.632462   78361 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-424031 host does not exist
	  To start a cluster, run: "minikube start -p download-only-424031"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-424031
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/json-events (4.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-867318 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-867318 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.598044841s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (4.60s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0210 12:32:32.077351   78349 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0210 12:32:32.077397   78349 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20390-71607/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-867318
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-867318: exit status 85 (61.651185ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-424031 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC |                     |
	|         | -p download-only-424031        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
	| delete  | -p download-only-424031        | download-only-424031 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC | 10 Feb 25 12:32 UTC |
	| start   | -o=json --download-only        | download-only-867318 | jenkins | v1.35.0 | 10 Feb 25 12:32 UTC |                     |
	|         | -p download-only-867318        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:32:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:32:27.519905   78719 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:32:27.520055   78719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:32:27.520069   78719 out.go:358] Setting ErrFile to fd 2...
	I0210 12:32:27.520076   78719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:32:27.520255   78719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:32:27.520890   78719 out.go:352] Setting JSON to true
	I0210 12:32:27.521749   78719 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11697,"bootTime":1739179051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:32:27.521840   78719 start.go:139] virtualization: kvm guest
	I0210 12:32:27.524012   78719 out.go:97] [download-only-867318] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:32:27.524186   78719 notify.go:220] Checking for updates...
	I0210 12:32:27.525415   78719 out.go:169] MINIKUBE_LOCATION=20390
	I0210 12:32:27.526740   78719 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:32:27.528111   78719 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:32:27.529441   78719 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 12:32:27.530615   78719 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0210 12:32:27.532850   78719 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 12:32:27.533086   78719 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:32:27.554425   78719 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 12:32:27.554503   78719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:32:27.598919   78719 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-02-10 12:32:27.590859712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:32:27.599043   78719 docker.go:318] overlay module found
	I0210 12:32:27.600799   78719 out.go:97] Using the docker driver based on user configuration
	I0210 12:32:27.600820   78719 start.go:297] selected driver: docker
	I0210 12:32:27.600825   78719 start.go:901] validating driver "docker" against <nil>
	I0210 12:32:27.600902   78719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:32:27.644910   78719 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:48 SystemTime:2025-02-10 12:32:27.635705917 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:32:27.645075   78719 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 12:32:27.645540   78719 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0210 12:32:27.645680   78719 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 12:32:27.647641   78719 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-867318 host does not exist
	  To start a cluster, run: "minikube start -p download-only-867318"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-867318
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.09s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-433372 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-433372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-433372
--- PASS: TestDownloadOnlyKic (1.09s)

                                                
                                    
x
+
TestBinaryMirror (0.75s)

                                                
                                                
=== RUN   TestBinaryMirror
I0210 12:32:33.827152   78349 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-655095 --alsologtostderr --binary-mirror http://127.0.0.1:37591 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-655095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-655095
--- PASS: TestBinaryMirror (0.75s)

                                                
                                    
x
+
TestOffline (52.62s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-464156 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-464156 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (50.394735055s)
helpers_test.go:175: Cleaning up "offline-containerd-464156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-464156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-464156: (2.229980647s)
--- PASS: TestOffline (52.62s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-444927
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-444927: exit status 85 (52.215239ms)

                                                
                                                
-- stdout --
	* Profile "addons-444927" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444927"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-444927
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-444927: exit status 85 (53.519605ms)

                                                
                                                
-- stdout --
	* Profile "addons-444927" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444927"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (140.25s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-444927 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-444927 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m20.245583952s)
--- PASS: TestAddons/Setup (140.25s)
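Once setup completes, the enabled addons can be cross-checked with the addons subcommands used elsewhere in this report; a minimal sketch against the same profile:
    # show which addons are enabled for this profile
    out/minikube-linux-amd64 addons list -p addons-444927
    # individual addons can also be toggled after setup
    out/minikube-linux-amd64 -p addons-444927 addons enable metrics-server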

                                                
                                    
x
+
TestAddons/serial/Volcano (38.51s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:815: volcano-admission stabilized in 13.011999ms
addons_test.go:823: volcano-controller stabilized in 13.088874ms
addons_test.go:807: volcano-scheduler stabilized in 13.167868ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-mpvxs" [d8e2cbd7-fe1c-4652-8ded-81f88606f57f] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.002739261s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-tpsmb" [58e50b3e-3ef5-4246-bc0a-494b499c83fb] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003579321s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-dpgwq" [322183a0-5746-498a-b342-7e7f131c4029] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003279991s
addons_test.go:842: (dbg) Run:  kubectl --context addons-444927 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-444927 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-444927 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [50862929-1e3b-4606-b6c3-4de8f3ee9340] Pending
helpers_test.go:344: "test-job-nginx-0" [50862929-1e3b-4606-b6c3-4de8f3ee9340] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [50862929-1e3b-4606-b6c3-4de8f3ee9340] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.002952309s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-444927 addons disable volcano --alsologtostderr -v=1: (11.183525657s)
--- PASS: TestAddons/serial/Volcano (38.51s)
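For reference, the job created from testdata/vcjob.yaml can be inspected directly; a sketch reusing the namespace and label already shown in the log:
    kubectl --context addons-444927 get vcjob -n my-volcano -o yaml
    kubectl --context addons-444927 get pods -n my-volcano -l volcano.sh/job-name=test-job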

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-444927 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-444927 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-444927 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-444927 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3a6deae3-ff6b-411e-b03d-d969a3e89a88] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3a6deae3-ff6b-411e-b03d-d969a3e89a88] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003472946s
addons_test.go:633: (dbg) Run:  kubectl --context addons-444927 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-444927 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-444927 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.44s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.014168ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-gh4sc" [42010757-f6a0-42bd-af45-d200619f078b] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002110027s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lkxgg" [48863c7e-8f22-4c47-a211-3f269092501f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003394316s
addons_test.go:331: (dbg) Run:  kubectl --context addons-444927 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-444927 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-444927 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.917970738s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 ip
2025/02/10 12:36:03 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.71s)
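The DEBUG GET above reaches the registry through the node IP on port 5000; a hedged manual check, assuming the addon registry serves the standard Docker Registry v2 API:
    curl http://$(out/minikube-linux-amd64 -p addons-444927 ip):5000/v2/_catalog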

                                                
                                    
x
+
TestAddons/parallel/Ingress (355.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-444927 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-444927 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-444927 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7fc4a8a7-2d16-43e2-8d8d-6d7a34c198a0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 5m46.003459182s
I0210 12:41:50.974949   78349 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-444927 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-444927 addons disable ingress --alsologtostderr -v=1: (7.631440153s)
--- PASS: TestAddons/parallel/Ingress (355.98s)
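The curl above runs inside the node over ssh; a host-side sketch of the same check, assuming the ingress controller answers on port 80 of the IP reported by "minikube ip" (the usual situation for the docker driver on Linux):
    curl -s -H 'Host: nginx.example.com' http://$(out/minikube-linux-amd64 -p addons-444927 ip)/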

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.7s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pmldr" [94a6c984-d4f3-4e90-8702-8b6187247fc5] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003715336s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-444927 addons disable inspektor-gadget --alsologtostderr -v=1: (5.691122649s)
--- PASS: TestAddons/parallel/InspektorGadget (10.70s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.065008ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-9rzwp" [18f2c184-138a-4ae6-9b10-1f55f0ffe77d] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002216749s
addons_test.go:402: (dbg) Run:  kubectl --context addons-444927 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)
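With metrics-server healthy, the same metrics API also serves node-level data; a sketch:
    kubectl --context addons-444927 top nodes
    kubectl --context addons-444927 top pods -A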

                                                
                                    
x
+
TestAddons/parallel/CSI (54.7s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0210 12:36:12.905544   78349 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0210 12:36:12.908641   78349 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0210 12:36:12.908667   78349 kapi.go:107] duration metric: took 3.138526ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.150606ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-444927 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-444927 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [21dd1c5e-5305-48bd-ac84-2cdfe58209c3] Pending
helpers_test.go:344: "task-pv-pod" [21dd1c5e-5305-48bd-ac84-2cdfe58209c3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [21dd1c5e-5305-48bd-ac84-2cdfe58209c3] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003420213s
addons_test.go:511: (dbg) Run:  kubectl --context addons-444927 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444927 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444927 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-444927 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-444927 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-444927 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444927 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-444927 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [2178f908-63fd-46d2-9b87-c46046d3780f] Pending
helpers_test.go:344: "task-pv-pod-restore" [2178f908-63fd-46d2-9b87-c46046d3780f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [2178f908-63fd-46d2-9b87-c46046d3780f] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002895094s
addons_test.go:553: (dbg) Run:  kubectl --context addons-444927 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-444927 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-444927 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-444927 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.641778333s)
--- PASS: TestAddons/parallel/CSI (54.70s)
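A condensed sketch of the checks this provision/snapshot/restore flow relies on, using the object names from the log (run before the test's own cleanup deletes them):
    kubectl --context addons-444927 get storageclass
    kubectl --context addons-444927 get pvc hpvc -o jsonpath='{.status.phase}'
    kubectl --context addons-444927 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'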

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-444927 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-gf9vc" [a90edba0-0f57-40e2-b330-7af55cb0716f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-gf9vc" [a90edba0-0f57-40e2-b330-7af55cb0716f] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003911521s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-444927 addons disable headlamp --alsologtostderr -v=1: (5.634590706s)
--- PASS: TestAddons/parallel/Headlamp (15.39s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-2cfbr" [48106c6d-a08b-4d3c-83ae-bd694336b2f7] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003737989s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.48s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-h5pb4" [3346d1a2-d520-442b-8349-6a8ecaea1a6f] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003183969s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-mm4rr" [e15b0191-4f20-4dd1-9563-c720b55b7a22] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003875182s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-444927 addons disable yakd --alsologtostderr -v=1: (5.629959296s)
--- PASS: TestAddons/parallel/Yakd (10.63s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-tffg2" [cbfc6cf0-103c-44d4-85d7-bb02305be0fb] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003206504s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-444927 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.12s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-444927
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-444927: (11.867181615s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-444927
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-444927
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-444927
--- PASS: TestAddons/StoppedEnableDisable (12.12s)

                                                
                                    
x
+
TestCertOptions (25.53s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-854080 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-854080 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (22.914274126s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-854080 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-854080 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-854080 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-854080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-854080
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-854080: (1.973497938s)
--- PASS: TestCertOptions (25.53s)
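To see just the names the test asserts on, the same certificate can be filtered down to its SANs; a sketch to run before the profile is cleaned up:
    out/minikube-linux-amd64 -p cert-options-854080 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'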

                                                
                                    
x
+
TestCertExpiration (213.28s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-504430 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-504430 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.065063043s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-504430 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-504430 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (4.80261398s)
helpers_test.go:175: Cleaning up "cert-expiration-504430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-504430
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-504430: (2.408739206s)
--- PASS: TestCertExpiration (213.28s)
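A hedged way to confirm the rotated expiry after the second start, before cleanup:
    out/minikube-linux-amd64 -p cert-expiration-504430 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"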

                                                
                                    
x
+
TestForceSystemdFlag (30.24s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-862552 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-862552 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (27.749303512s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-862552 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-862552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-862552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-862552: (2.190313383s)
--- PASS: TestForceSystemdFlag (30.24s)
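The config.toml dump above is checked for the runc cgroup driver; a narrower sketch, assuming the standard containerd CRI option name:
    out/minikube-linux-amd64 -p force-systemd-flag-862552 ssh "grep SystemdCgroup /etc/containerd/config.toml"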

                                                
                                    
x
+
TestForceSystemdEnv (33.65s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-504872 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-504872 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.389731226s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-504872 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-504872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-504872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-504872: (3.999726169s)
--- PASS: TestForceSystemdEnv (33.65s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.52s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0210 13:17:16.884131   78349 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 13:17:16.884273   78349 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0210 13:17:16.915722   78349 install.go:62] docker-machine-driver-kvm2: exit status 1
W0210 13:17:16.916121   78349 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 13:17:16.916185   78349 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1055374796/001/docker-machine-driver-kvm2
I0210 13:17:17.183223   78349 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1055374796/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000699e78 gz:0xc000699f00 tar:0xc000699eb0 tar.bz2:0xc000699ec0 tar.gz:0xc000699ed0 tar.xz:0xc000699ee0 tar.zst:0xc000699ef0 tbz2:0xc000699ec0 tgz:0xc000699ed0 txz:0xc000699ee0 tzst:0xc000699ef0 xz:0xc000699f08 zip:0xc000699f20 zst:0xc000699f30] Getters:map[file:0xc0018297a0 http:0xc0008cc280 https:0xc0008cc2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 13:17:17.183271   78349 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1055374796/001/docker-machine-driver-kvm2
I0210 13:17:18.895499   78349 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 13:17:18.895600   78349 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0210 13:17:18.925153   78349 install.go:137] /home/jenkins/workspace/Docker_Linux_containerd_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0210 13:17:18.925185   78349 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0210 13:17:18.925247   78349 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0210 13:17:18.925272   78349 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1055374796/002/docker-machine-driver-kvm2
I0210 13:17:19.088442   78349 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1055374796/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0 0x54825c0] Decompressors:map[bz2:0xc000699e78 gz:0xc000699f00 tar:0xc000699eb0 tar.bz2:0xc000699ec0 tar.gz:0xc000699ed0 tar.xz:0xc000699ee0 tar.zst:0xc000699ef0 tbz2:0xc000699ec0 tgz:0xc000699ed0 txz:0xc000699ee0 tzst:0xc000699ef0 xz:0xc000699f08 zip:0xc000699f20 zst:0xc000699f30] Getters:map[file:0xc000c01af0 http:0xc000752aa0 https:0xc000752af0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0210 13:17:19.088512   78349 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1055374796/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.52s)
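The fallback download exercised here can also be done by hand; a sketch using the release URL from the log (the install directory is an assumption, any directory on PATH works):
    curl -LO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2
    chmod +x docker-machine-driver-kvm2
    sudo mv docker-machine-driver-kvm2 /usr/local/bin/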

                                                
                                    
x
+
TestErrorSpam/setup (22.59s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-685628 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-685628 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-685628 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-685628 --driver=docker  --container-runtime=containerd: (22.593334492s)
--- PASS: TestErrorSpam/setup (22.59s)

                                                
                                    
x
+
TestErrorSpam/start (0.58s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

                                                
                                    
x
+
TestErrorSpam/status (0.87s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 status
--- PASS: TestErrorSpam/status (0.87s)

                                                
                                    
x
+
TestErrorSpam/pause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

                                                
                                    
x
+
TestErrorSpam/stop (1.37s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 stop: (1.178574131s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-685628 --log_dir /tmp/nospam-685628 stop
--- PASS: TestErrorSpam/stop (1.37s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20390-71607/.minikube/files/etc/test/nested/copy/78349/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (38.65s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-644291 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-644291 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (38.648682868s)
--- PASS: TestFunctional/serial/StartWithProxy (38.65s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.26s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0210 12:44:06.558569   78349 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-644291 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-644291 --alsologtostderr -v=8: (5.261525421s)
functional_test.go:680: soft start took 5.262349865s for "functional-644291" cluster.
I0210 12:44:11.820461   78349 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (5.26s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-644291 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-644291 cache add registry.k8s.io/pause:3.3: (1.052491951s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-644291 /tmp/TestFunctionalserialCacheCmdcacheadd_local3531822306/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cache add minikube-local-cache-test:functional-644291
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cache delete minikube-local-cache-test:functional-644291
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-644291
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)
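For reference, a minimal sketch of the local-image flow this test exercises: build an image that exists only in the host's Docker daemon, add it to minikube's image cache so the node can run it, then clean up. The build context path is illustrative.

    docker build -t minikube-local-cache-test:functional-644291 .   # any build context works here
    out/minikube-linux-amd64 -p functional-644291 cache add minikube-local-cache-test:functional-644291
    out/minikube-linux-amd64 -p functional-644291 cache delete minikube-local-cache-test:functional-644291
    docker rmi minikube-local-cache-test:functional-644291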

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.679477ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)
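For reference, the reload flow captured above: the image is removed from the node's containerd store with crictl, `crictl inspecti` then fails (exit status 1, "no such image"), and `cache reload` repopulates the node from the host-side cache.

    out/minikube-linux-amd64 -p functional-644291 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-644291 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image not present
    out/minikube-linux-amd64 -p functional-644291 cache reload
    out/minikube-linux-amd64 -p functional-644291 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again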

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 kubectl -- --context functional-644291 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-644291 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)
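For reference, these two tests drive the same cluster through kubectl in two ways: via the `minikube kubectl --` wrapper (arguments after `--` are passed to kubectl unchanged) and by invoking the kubectl binary at out/kubectl directly (how that binary is produced is outside this log).

    out/minikube-linux-amd64 -p functional-644291 kubectl -- --context functional-644291 get pods
    out/kubectl --context functional-644291 get pods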

                                                
                                    
TestFunctional/serial/ExtraConfig (43.33s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-644291 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0210 12:44:54.881559   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:54.889320   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:54.900702   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:54.922135   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:54.963605   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:55.045130   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:55.206645   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:55.528408   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:56.170252   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:44:57.451710   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:45:00.014267   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-644291 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.330545361s)
functional_test.go:778: restart took 43.330731099s for "functional-644291" cluster.
I0210 12:45:01.568795   78349 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (43.33s)
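For reference, `--extra-config` forwards flags to individual control-plane components, with the key prefixed by the component name. The restart above re-applies the existing profile with one extra kube-apiserver flag:

    out/minikube-linux-amd64 start -p functional-644291 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all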

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-644291 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-644291 logs: (1.403457716s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 logs --file /tmp/TestFunctionalserialLogsFileCmd1997115020/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-644291 logs --file /tmp/TestFunctionalserialLogsFileCmd1997115020/001/logs.txt: (1.400825532s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

                                                
                                    
TestFunctional/serial/InvalidService (4.06s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-644291 apply -f testdata/invalidsvc.yaml
E0210 12:45:05.136293   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-644291
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-644291: exit status 115 (328.040779ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30935 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-644291 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)
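For reference, the failure mode being asserted: a Service with no running backing pod is created, and `minikube service` refuses to open it, exiting with status 115 (SVC_UNREACHABLE) as shown in the stderr block above.

    kubectl --context functional-644291 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-644291   # exit 115: no running pod for the service
    kubectl --context functional-644291 delete -f testdata/invalidsvc.yaml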

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 config get cpus: exit status 14 (72.867332ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 config get cpus: exit status 14 (56.971524ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
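For reference, the config round-trip being checked: `config get` on an unset key exits with status 14 and reports "specified key could not be found in config", while set/unset succeed silently.

    out/minikube-linux-amd64 -p functional-644291 config get cpus     # exit 14 while the key is unset
    out/minikube-linux-amd64 -p functional-644291 config set cpus 2
    out/minikube-linux-amd64 -p functional-644291 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-644291 config unset cpus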

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-644291 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-644291 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 127571: os: process already finished
E0210 12:46:16.821725   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 12:47:38.743625   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (13.64s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-644291 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-644291 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (167.735296ms)

                                                
                                                
-- stdout --
	* [functional-644291] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:45:19.500357  124498 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:45:19.500520  124498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:19.500532  124498 out.go:358] Setting ErrFile to fd 2...
	I0210 12:45:19.500539  124498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:19.500774  124498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:45:19.501402  124498 out.go:352] Setting JSON to false
	I0210 12:45:19.502676  124498 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12469,"bootTime":1739179051,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:45:19.502775  124498 start.go:139] virtualization: kvm guest
	I0210 12:45:19.505682  124498 out.go:177] * [functional-644291] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 12:45:19.507392  124498 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:45:19.507420  124498 notify.go:220] Checking for updates...
	I0210 12:45:19.511066  124498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:45:19.512962  124498 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:45:19.514987  124498 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 12:45:19.516684  124498 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:45:19.518095  124498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:45:19.520078  124498 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:45:19.520569  124498 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:45:19.547682  124498 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 12:45:19.547782  124498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:45:19.598061  124498 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-10 12:45:19.589383999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:45:19.598183  124498 docker.go:318] overlay module found
	I0210 12:45:19.600203  124498 out.go:177] * Using the docker driver based on existing profile
	I0210 12:45:19.601589  124498 start.go:297] selected driver: docker
	I0210 12:45:19.601604  124498 start.go:901] validating driver "docker" against &{Name:functional-644291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-644291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:45:19.601703  124498 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:45:19.603645  124498 out.go:201] 
	W0210 12:45:19.605279  124498 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0210 12:45:19.606647  124498 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-644291 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.36s)
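For reference, `--dry-run` validates the requested configuration against the existing profile without starting anything; asking for 250MB trips the 1800MB usable minimum and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), exactly as captured above.

    out/minikube-linux-amd64 start -p functional-644291 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=containerd   # exit 23
    out/minikube-linux-amd64 start -p functional-644291 --dry-run --alsologtostderr -v=1 \
      --driver=docker --container-runtime=containerd                     # succeeds against the existing profile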

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-644291 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-644291 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (150.499239ms)

                                                
                                                
-- stdout --
	* [functional-644291] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:45:19.853036  124776 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:45:19.853170  124776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:19.853180  124776 out.go:358] Setting ErrFile to fd 2...
	I0210 12:45:19.853187  124776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:45:19.853489  124776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:45:19.854071  124776 out.go:352] Setting JSON to false
	I0210 12:45:19.855111  124776 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12469,"bootTime":1739179051,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 12:45:19.855223  124776 start.go:139] virtualization: kvm guest
	I0210 12:45:19.857439  124776 out.go:177] * [functional-644291] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0210 12:45:19.860460  124776 notify.go:220] Checking for updates...
	I0210 12:45:19.860507  124776 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 12:45:19.862100  124776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:45:19.863628  124776 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 12:45:19.864928  124776 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 12:45:19.866453  124776 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 12:45:19.867685  124776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:45:19.869275  124776 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:45:19.869778  124776 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:45:19.894412  124776 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 12:45:19.894508  124776 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:45:19.944105  124776 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:55 SystemTime:2025-02-10 12:45:19.93510259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:45:19.944255  124776 docker.go:318] overlay module found
	I0210 12:45:19.946960  124776 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0210 12:45:19.948316  124776 start.go:297] selected driver: docker
	I0210 12:45:19.948329  124776 start.go:901] validating driver "docker" against &{Name:functional-644291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-644291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:45:19.948417  124776 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:45:19.950489  124776 out.go:201] 
	W0210 12:45:19.951592  124776 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0210 12:45:19.952654  124776 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)
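For reference, the three output modes exercised above: the default table, a caller-supplied Go template (the field names come from the status struct; the label text before each colon is arbitrary), and JSON.

    out/minikube-linux-amd64 -p functional-644291 status
    out/minikube-linux-amd64 -p functional-644291 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
    out/minikube-linux-amd64 -p functional-644291 status -o json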

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-644291 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-644291 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-2klqt" [975b4e81-87b3-4be1-9458-9db716a40f3c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-2klqt" [975b4e81-87b3-4be1-9458-9db716a40f3c] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003208152s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:31477
functional_test.go:1692: http://192.168.49.2:31477: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-2klqt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31477
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.66s)
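For reference, the connectivity flow above: create a Deployment, expose it as a NodePort Service, then ask minikube for a reachable URL and fetch it (the curl call is illustrative; the test fetches the URL from Go).

    kubectl --context functional-644291 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-644291 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-644291 service hello-node-connect --url   # e.g. http://192.168.49.2:31477
    curl "$(out/minikube-linux-amd64 -p functional-644291 service hello-node-connect --url)"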

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh -n functional-644291 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cp functional-644291:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2013361157/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh -n functional-644291 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh -n functional-644291 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.89s)
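For reference, `minikube cp` copies in either direction, with the node side addressed as <profile>:<path>. The local destination path below is illustrative.

    out/minikube-linux-amd64 -p functional-644291 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-644291 cp functional-644291:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p functional-644291 ssh -n functional-644291 "sudo cat /home/docker/cp-test.txt"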

                                                
                                    
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/78349/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo cat /etc/test/nested/copy/78349/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
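For reference, a sketch of how a file such as /etc/test/nested/copy/78349/hosts ends up inside the node, assuming minikube's file-sync convention that content under $MINIKUBE_HOME/files/ is copied into the node at the same relative path during start:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/78349
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/78349/hosts
    out/minikube-linux-amd64 start -p functional-644291                                   # sync happens during start
    out/minikube-linux-amd64 -p functional-644291 ssh "sudo cat /etc/test/nested/copy/78349/hosts"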

                                                
                                    
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/78349.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo cat /etc/ssl/certs/78349.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/78349.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo cat /usr/share/ca-certificates/78349.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/783492.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo cat /etc/ssl/certs/783492.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/783492.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo cat /usr/share/ca-certificates/783492.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-644291 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 ssh "sudo systemctl is-active docker": exit status 1 (272.198044ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 ssh "sudo systemctl is-active crio": exit status 1 (258.703495ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
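For reference, the check above: with containerd selected as the runtime, the docker and crio units inside the node must be inactive. `systemctl is-active` prints "inactive" and exits with status 3 for such a unit (the "ssh: Process exited with status 3" lines), which `minikube ssh` propagates as a non-zero exit.

    out/minikube-linux-amd64 -p functional-644291 ssh "sudo systemctl is-active docker"   # prints "inactive", non-zero exit
    out/minikube-linux-amd64 -p functional-644291 ssh "sudo systemctl is-active crio"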

                                                
                                    
TestFunctional/parallel/License (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-644291 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-644291 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-xt54x" [e5e9f27d-3e11-401f-9216-2d7173e6a515] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-xt54x" [e5e9f27d-3e11-401f-9216-2d7173e6a515] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.002801781s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-644291 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-644291 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-644291 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-644291 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 121494: os: process already finished
helpers_test.go:508: unable to kill pid 121259: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-644291 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-644291 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5e3af86f-501f-44aa-a637-9ddb97816bba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5e3af86f-501f-44aa-a637-9ddb97816bba] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003483588s
I0210 12:45:20.218647   78349 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.26s)
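For reference, a sketch of the surrounding tunnel workflow, assuming testdata/testsvc.yaml defines a LoadBalancer-type Service (the later tunnel subtests wait on its external IP): keep `minikube tunnel` running, apply the manifest, and watch the Service.

    out/minikube-linux-amd64 -p functional-644291 tunnel --alsologtostderr &   # running it in the background is illustrative
    kubectl --context functional-644291 apply -f testdata/testsvc.yaml
    kubectl --context functional-644291 get svc nginx-svc --watch              # EXTERNAL-IP is populated while the tunnel runs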

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "310.031171ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "55.848484ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "345.961428ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "53.568906ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
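For reference, the profile listing variants timed above; the --light run skips validating each cluster's status, which is consistent with it returning in roughly 54ms versus roughly 346ms for the full JSON listing.

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light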

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdany-port412772154/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739191512508743274" to /tmp/TestFunctionalparallelMountCmdany-port412772154/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739191512508743274" to /tmp/TestFunctionalparallelMountCmdany-port412772154/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739191512508743274" to /tmp/TestFunctionalparallelMountCmdany-port412772154/001/test-1739191512508743274
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (297.693161ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 12:45:12.806768   78349 retry.go:31] will retry after 718.046393ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 10 12:45 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 10 12:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 10 12:45 test-1739191512508743274
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh cat /mount-9p/test-1739191512508743274
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-644291 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [76d02897-9760-42ee-8bcb-a923b5c80f0c] Pending
helpers_test.go:344: "busybox-mount" [76d02897-9760-42ee-8bcb-a923b5c80f0c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0210 12:45:15.377970   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [76d02897-9760-42ee-8bcb-a923b5c80f0c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [76d02897-9760-42ee-8bcb-a923b5c80f0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003513881s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-644291 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdany-port412772154/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.84s)
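Note on the retry seen above: the first findmnt probe runs before the 9p mount is visible in the guest, so the helper backs off (retry.go) and probes again until it succeeds. Below is a minimal Go sketch of that poll-until-mounted pattern; it is an illustration only, not the minikube helper itself. The binary path, profile name and probe command are taken from this run, while the 30s deadline and fixed backoff are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs a probe command until it exits 0 or the deadline
// passes, mirroring the backoff-and-retry behaviour shown in the log above.
func waitForMount(deadline time.Duration, name string, args ...string) error {
	stop := time.Now().Add(deadline)
	for {
		err := exec.Command(name, args...).Run()
		if err == nil {
			return nil // probe succeeded: the 9p mount is visible in the guest
		}
		if time.Now().After(stop) {
			return fmt.Errorf("mount not ready after %s: %w", deadline, err)
		}
		time.Sleep(500 * time.Millisecond) // fixed backoff; retry.go uses a randomized delay
	}
}

func main() {
	// Probe the guest the same way the test does: findmnt over "minikube ssh".
	err := waitForMount(30*time.Second, "out/minikube-linux-amd64",
		"-p", "functional-644291", "ssh", "findmnt -T /mount-9p | grep 9p")
	fmt.Println("mount ready:", err == nil)
}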

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 service list -o json
functional_test.go:1511: Took "288.093665ms" to run "out/minikube-linux-amd64 -p functional-644291 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:32379
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:32379
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
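The HTTPS and URL subtests above both resolve a NodePort endpoint for the hello-node service. The following is a small, hypothetical Go sketch that obtains the URL the same way ("service hello-node --url") and issues one request against it; the binary path, profile and service name come from this run, and the rest is an assumption, not part of the test suite.

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL, as "service hello-node --url" does above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-644291",
		"service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:32379
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("hello-node responded:", resp.Status)
}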

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdspecific-port3194925710/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.571477ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 12:45:19.631968   78349 retry.go:31] will retry after 366.407814ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdspecific-port3194925710/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 ssh "sudo umount -f /mount-9p": exit status 1 (270.148504ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-644291 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdspecific-port3194925710/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-644291 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.194.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-644291 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-644291 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-644291
docker.io/kindest/kindnetd:v20241212-9f82dd49
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-644291 image ls --format short --alsologtostderr:
I0210 12:45:28.997892  128105 out.go:345] Setting OutFile to fd 1 ...
I0210 12:45:28.998043  128105 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:28.998054  128105 out.go:358] Setting ErrFile to fd 2...
I0210 12:45:28.998060  128105 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:28.998328  128105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
I0210 12:45:28.999168  128105 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:28.999312  128105 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:28.999882  128105 cli_runner.go:164] Run: docker container inspect functional-644291 --format={{.State.Status}}
I0210 12:45:29.019258  128105 ssh_runner.go:195] Run: systemctl --version
I0210 12:45:29.019310  128105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644291
I0210 12:45:29.037530  128105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/functional-644291/id_rsa Username:docker}
I0210 12:45:29.128915  128105 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-644291 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/kube-proxy                  | v1.32.1            | sha256:e29f9c | 30.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/kindest/kindnetd                  | v20241212-9f82dd49 | sha256:d30084 | 39MB   |
| docker.io/library/minikube-local-cache-test | functional-644291  | sha256:ea9061 | 992B   |
| localhost/my-image                          | functional-644291  | sha256:914593 | 775kB  |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| docker.io/library/nginx                     | alpine             | sha256:d41a14 | 20.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/kube-apiserver              | v1.32.1            | sha256:95c0bd | 28.7MB |
| registry.k8s.io/etcd                        | 3.5.16-0           | sha256:a9e7e6 | 57.7MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1            | sha256:019ee1 | 26.3MB |
| registry.k8s.io/kube-scheduler              | v1.32.1            | sha256:2b0d65 | 20.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20241108-5c6d2daf | sha256:50415e | 38.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-644291 image ls --format table --alsologtostderr:
I0210 12:45:32.787255  128624 out.go:345] Setting OutFile to fd 1 ...
I0210 12:45:32.787398  128624 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:32.787409  128624 out.go:358] Setting ErrFile to fd 2...
I0210 12:45:32.787416  128624 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:32.787645  128624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
I0210 12:45:32.788260  128624 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:32.788375  128624 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:32.788809  128624 cli_runner.go:164] Run: docker container inspect functional-644291 --format={{.State.Status}}
I0210 12:45:32.806558  128624 ssh_runner.go:195] Run: systemctl --version
I0210 12:45:32.806606  128624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644291
I0210 12:45:32.824048  128624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/functional-644291/id_rsa Username:docker}
I0210 12:45:32.916713  128624 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-644291 image ls --format json --alsologtostderr:
[{"id":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"28671624"},{"id":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"26258470"},{"id":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"30908485"},{"id":"sha256:ea90611e8cbf0d186c17d77353d98949affc5a95c53bbbb1b86f41998afdac90","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-644291"],"size":"9
92"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:9145938348bdcb5ceb61b68c2197f12ce4309156f3014bd304bd2cffea7f9328","repoDigests":[],"repoTags":["localhost/my-image:functional-644291"],"size":"774888"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"20657536"},{"id":"sha256:d30
0845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56","repoDigests":["docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26"],"repoTags":["docker.io/kindest/kindnetd:v20241212-9f82dd49"],"size":"39008320"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"576805
41"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111","repoDigests":["docker.io/library/nginx@sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef"],"repoTags":["docker.io/library/nginx:alpine"],"size":"20832755"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echose
rver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"38601118"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-644291 image ls --format json --alsologtostderr:
I0210 12:45:32.569095  128557 out.go:345] Setting OutFile to fd 1 ...
I0210 12:45:32.569385  128557 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:32.569397  128557 out.go:358] Setting ErrFile to fd 2...
I0210 12:45:32.569402  128557 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:32.569602  128557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
I0210 12:45:32.570228  128557 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:32.570333  128557 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:32.570735  128557 cli_runner.go:164] Run: docker container inspect functional-644291 --format={{.State.Status}}
I0210 12:45:32.588603  128557 ssh_runner.go:195] Run: systemctl --version
I0210 12:45:32.588666  128557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644291
I0210 12:45:32.611700  128557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/functional-644291/id_rsa Username:docker}
I0210 12:45:32.700849  128557 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
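The stdout above is a single JSON array of image records with id, repoDigests, repoTags and size fields. As a reading aid, here is a minimal Go sketch (not part of the test suite) that decodes that shape; the struct definition follows the fields visible in the output, while the images.json file name is an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// image mirrors the fields visible in the "image ls --format json" output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	// Assume the JSON array from the test output has been saved to images.json.
	data, err := os.ReadFile("images.json")
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<untagged>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}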

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-644291 image ls --format yaml --alsologtostderr:
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:d300845f67aebd4f27f549889087215f476cecdd6d9a715b49a4152857549c56
repoDigests:
- docker.io/kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26
repoTags:
- docker.io/kindest/kindnetd:v20241212-9f82dd49
size: "39008320"
- id: sha256:ea90611e8cbf0d186c17d77353d98949affc5a95c53bbbb1b86f41998afdac90
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-644291
size: "992"
- id: sha256:9145938348bdcb5ceb61b68c2197f12ce4309156f3014bd304bd2cffea7f9328
repoDigests: []
repoTags:
- localhost/my-image:functional-644291
size: "774888"
- id: sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "26258470"
- id: sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "30908485"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "28671624"
- id: sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "20657536"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111
repoDigests:
- docker.io/library/nginx@sha256:b471bb609adc83f73c2d95148cf1bd683408739a3c09c0afc666ea2af0037aef
repoTags:
- docker.io/library/nginx:alpine
size: "20832755"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "38601118"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "57680541"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-644291 image ls --format yaml --alsologtostderr:
I0210 12:45:32.359169  128507 out.go:345] Setting OutFile to fd 1 ...
I0210 12:45:32.359326  128507 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:32.359338  128507 out.go:358] Setting ErrFile to fd 2...
I0210 12:45:32.359345  128507 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:32.359540  128507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
I0210 12:45:32.360154  128507 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:32.360272  128507 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:32.360711  128507 cli_runner.go:164] Run: docker container inspect functional-644291 --format={{.State.Status}}
I0210 12:45:32.378077  128507 ssh_runner.go:195] Run: systemctl --version
I0210 12:45:32.378144  128507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644291
I0210 12:45:32.396250  128507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/functional-644291/id_rsa Username:docker}
I0210 12:45:32.484629  128507 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 ssh pgrep buildkitd: exit status 1 (295.404451ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image build -t localhost/my-image:functional-644291 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-644291 image build -t localhost/my-image:functional-644291 testdata/build --alsologtostderr: (2.604656896s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-644291 image build -t localhost/my-image:functional-644291 testdata/build --alsologtostderr:
I0210 12:45:29.544524  128246 out.go:345] Setting OutFile to fd 1 ...
I0210 12:45:29.544801  128246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:29.544807  128246 out.go:358] Setting ErrFile to fd 2...
I0210 12:45:29.544811  128246 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 12:45:29.545073  128246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
I0210 12:45:29.545860  128246 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:29.546409  128246 config.go:182] Loaded profile config "functional-644291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 12:45:29.546900  128246 cli_runner.go:164] Run: docker container inspect functional-644291 --format={{.State.Status}}
I0210 12:45:29.569126  128246 ssh_runner.go:195] Run: systemctl --version
I0210 12:45:29.569177  128246 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-644291
I0210 12:45:29.586436  128246 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/functional-644291/id_rsa Username:docker}
I0210 12:45:29.688802  128246 build_images.go:161] Building image from path: /tmp/build.292813423.tar
I0210 12:45:29.688879  128246 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0210 12:45:29.699293  128246 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.292813423.tar
I0210 12:45:29.702971  128246 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.292813423.tar: stat -c "%s %y" /var/lib/minikube/build/build.292813423.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.292813423.tar': No such file or directory
I0210 12:45:29.703008  128246 ssh_runner.go:362] scp /tmp/build.292813423.tar --> /var/lib/minikube/build/build.292813423.tar (3072 bytes)
I0210 12:45:29.731984  128246 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.292813423
I0210 12:45:29.741786  128246 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.292813423 -xf /var/lib/minikube/build/build.292813423.tar
I0210 12:45:29.751388  128246 containerd.go:394] Building image: /var/lib/minikube/build/build.292813423
I0210 12:45:29.751455  128246 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.292813423 --local dockerfile=/var/lib/minikube/build/build.292813423 --output type=image,name=localhost/my-image:functional-644291
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.0s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:eadb7c500f5126ac9be8714050ae1d6a3460c5c88e02669a7589f814e913afdf done
#8 exporting config sha256:9145938348bdcb5ceb61b68c2197f12ce4309156f3014bd304bd2cffea7f9328 0.0s done
#8 naming to localhost/my-image:functional-644291 done
#8 DONE 0.1s
I0210 12:45:32.049889  128246 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.292813423 --local dockerfile=/var/lib/minikube/build/build.292813423 --output type=image,name=localhost/my-image:functional-644291: (2.298395893s)
I0210 12:45:32.049984  128246 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.292813423
I0210 12:45:32.059224  128246 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.292813423.tar
I0210 12:45:32.087832  128246 build_images.go:217] Built localhost/my-image:functional-644291 from /tmp/build.292813423.tar
I0210 12:45:32.087870  128246 build_images.go:133] succeeded building to: functional-644291
I0210 12:45:32.087876  128246 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.12s)
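The build test above works in two steps: build the testdata/build context inside the node with "image build", then confirm the tag appears in "image ls". Below is a hypothetical Go sketch driving the same two minikube subcommands; the profile, tag and context directory are taken from the log, everything else (helper names, error handling) is assumed.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

const (
	minikube = "out/minikube-linux-amd64"
	profile  = "functional-644291"
	tag      = "localhost/my-image:" + profile
)

// run invokes the minikube binary with the given arguments and returns its combined output.
func run(args ...string) (string, error) {
	var buf bytes.Buffer
	cmd := exec.Command(minikube, args...)
	cmd.Stdout = &buf
	cmd.Stderr = &buf
	err := cmd.Run()
	return buf.String(), err
}

func main() {
	// Step 1: build the testdata/build context inside the node, as the test does.
	if out, err := run("-p", profile, "image", "build", "-t", tag, "testdata/build"); err != nil {
		fmt.Println(out)
		panic(err)
	}
	// Step 2: list images and check the new tag is present, as functional_test.go:468 does.
	out, err := run("-p", profile, "image", "ls")
	if err != nil {
		panic(err)
	}
	fmt.Println("image present:", strings.Contains(out, tag))
}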

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1838848673/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1838848673/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1838848673/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T" /mount1: exit status 1 (338.549482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0210 12:45:21.339945   78349 retry.go:31] will retry after 672.288032ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-644291 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1838848673/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1838848673/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-644291 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1838848673/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image rm kicbase/echo-server:functional-644291 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 update-context --alsologtostderr -v=2
E0210 12:45:35.859584   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
2025/02/10 12:45:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-644291 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-644291
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-644291
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-644291
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (94.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587709 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-587709 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m33.726310919s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (94.41s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-587709 -- rollout status deployment/busybox: (2.095876823s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-nsx9p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-t98sh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-wrlv8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-nsx9p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-t98sh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-wrlv8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-nsx9p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-t98sh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-wrlv8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.99s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-nsx9p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-nsx9p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-t98sh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-t98sh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-wrlv8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-587709 -- exec busybox-58667487b6-wrlv8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (21.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-587709 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-587709 -v=7 --alsologtostderr: (20.18211584s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.01s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-587709 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp testdata/cp-test.txt ha-587709:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1513338385/001/cp-test_ha-587709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709:/home/docker/cp-test.txt ha-587709-m02:/home/docker/cp-test_ha-587709_ha-587709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m02 "sudo cat /home/docker/cp-test_ha-587709_ha-587709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709:/home/docker/cp-test.txt ha-587709-m03:/home/docker/cp-test_ha-587709_ha-587709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m03 "sudo cat /home/docker/cp-test_ha-587709_ha-587709-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709:/home/docker/cp-test.txt ha-587709-m04:/home/docker/cp-test_ha-587709_ha-587709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m04 "sudo cat /home/docker/cp-test_ha-587709_ha-587709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp testdata/cp-test.txt ha-587709-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1513338385/001/cp-test_ha-587709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m02:/home/docker/cp-test.txt ha-587709:/home/docker/cp-test_ha-587709-m02_ha-587709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709 "sudo cat /home/docker/cp-test_ha-587709-m02_ha-587709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m02:/home/docker/cp-test.txt ha-587709-m03:/home/docker/cp-test_ha-587709-m02_ha-587709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m03 "sudo cat /home/docker/cp-test_ha-587709-m02_ha-587709-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m02:/home/docker/cp-test.txt ha-587709-m04:/home/docker/cp-test_ha-587709-m02_ha-587709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m04 "sudo cat /home/docker/cp-test_ha-587709-m02_ha-587709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp testdata/cp-test.txt ha-587709-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1513338385/001/cp-test_ha-587709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m03:/home/docker/cp-test.txt ha-587709:/home/docker/cp-test_ha-587709-m03_ha-587709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709 "sudo cat /home/docker/cp-test_ha-587709-m03_ha-587709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m03:/home/docker/cp-test.txt ha-587709-m02:/home/docker/cp-test_ha-587709-m03_ha-587709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m02 "sudo cat /home/docker/cp-test_ha-587709-m03_ha-587709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m03:/home/docker/cp-test.txt ha-587709-m04:/home/docker/cp-test_ha-587709-m03_ha-587709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m04 "sudo cat /home/docker/cp-test_ha-587709-m03_ha-587709-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp testdata/cp-test.txt ha-587709-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1513338385/001/cp-test_ha-587709-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m04:/home/docker/cp-test.txt ha-587709:/home/docker/cp-test_ha-587709-m04_ha-587709.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709 "sudo cat /home/docker/cp-test_ha-587709-m04_ha-587709.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m04:/home/docker/cp-test.txt ha-587709-m02:/home/docker/cp-test_ha-587709-m04_ha-587709-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m02 "sudo cat /home/docker/cp-test_ha-587709-m04_ha-587709-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 cp ha-587709-m04:/home/docker/cp-test.txt ha-587709-m03:/home/docker/cp-test_ha-587709-m04_ha-587709-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 ssh -n ha-587709-m03 "sudo cat /home/docker/cp-test_ha-587709-m04_ha-587709-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.85s)
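
Each cp step above is verified by reading the file back over ssh on the destination node. A condensed Go sketch of that round trip, assuming the same profile and binary path as this run; error handling is trimmed to a fatal log for brevity:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to the minikube binary, much like helpers_test.go does.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Copy a local file onto the first control plane, then cat it back over ssh.
	run("-p", "ha-587709", "cp", "testdata/cp-test.txt", "ha-587709:/home/docker/cp-test.txt")
	fmt.Print(run("-p", "ha-587709", "ssh", "-n", "ha-587709", "sudo cat /home/docker/cp-test.txt"))

	// Node-to-node copies use the same <node>:<path> syntax on both sides.
	run("-p", "ha-587709", "cp", "ha-587709:/home/docker/cp-test.txt",
		"ha-587709-m02:/home/docker/cp-test_ha-587709_ha-587709-m02.txt")
	fmt.Print(run("-p", "ha-587709", "ssh", "-n", "ha-587709-m02",
		"sudo cat /home/docker/cp-test_ha-587709_ha-587709-m02.txt"))
}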

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-587709 node stop m02 -v=7 --alsologtostderr: (11.84442163s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr: exit status 7 (658.808452ms)

                                                
                                                
-- stdout --
	ha-587709
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-587709-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587709-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-587709-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:57:58.770821  153493 out.go:345] Setting OutFile to fd 1 ...
	I0210 12:57:58.771068  153493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:57:58.771077  153493 out.go:358] Setting ErrFile to fd 2...
	I0210 12:57:58.771080  153493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:57:58.771244  153493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 12:57:58.771406  153493 out.go:352] Setting JSON to false
	I0210 12:57:58.771434  153493 mustload.go:65] Loading cluster: ha-587709
	I0210 12:57:58.771567  153493 notify.go:220] Checking for updates...
	I0210 12:57:58.771820  153493 config.go:182] Loaded profile config "ha-587709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 12:57:58.771839  153493 status.go:174] checking status of ha-587709 ...
	I0210 12:57:58.772266  153493 cli_runner.go:164] Run: docker container inspect ha-587709 --format={{.State.Status}}
	I0210 12:57:58.791437  153493 status.go:371] ha-587709 host status = "Running" (err=<nil>)
	I0210 12:57:58.791462  153493 host.go:66] Checking if "ha-587709" exists ...
	I0210 12:57:58.791700  153493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-587709
	I0210 12:57:58.809455  153493 host.go:66] Checking if "ha-587709" exists ...
	I0210 12:57:58.809710  153493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:57:58.809757  153493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-587709
	I0210 12:57:58.827186  153493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/ha-587709/id_rsa Username:docker}
	I0210 12:57:58.917767  153493 ssh_runner.go:195] Run: systemctl --version
	I0210 12:57:58.921813  153493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:57:58.932285  153493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 12:57:58.982772  153493 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:74 SystemTime:2025-02-10 12:57:58.973482582 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 12:57:58.983404  153493 kubeconfig.go:125] found "ha-587709" server: "https://192.168.49.254:8443"
	I0210 12:57:58.983444  153493 api_server.go:166] Checking apiserver status ...
	I0210 12:57:58.983487  153493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:57:58.994213  153493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	I0210 12:57:59.003309  153493 api_server.go:182] apiserver freezer: "9:freezer:/docker/dc9c81226528b5d97b2cd9a773ad76fbab3294691c9b206f90df173e19164dd9/kubepods/burstable/pod8ba0441227ef7a835539725b5936bd9a/36244e545dd73818f6f22282c79d903d3539881a0878f5db2bad6843cf27bd8d"
	I0210 12:57:59.003380  153493 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dc9c81226528b5d97b2cd9a773ad76fbab3294691c9b206f90df173e19164dd9/kubepods/burstable/pod8ba0441227ef7a835539725b5936bd9a/36244e545dd73818f6f22282c79d903d3539881a0878f5db2bad6843cf27bd8d/freezer.state
	I0210 12:57:59.011191  153493 api_server.go:204] freezer state: "THAWED"
	I0210 12:57:59.011227  153493 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0210 12:57:59.016651  153493 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0210 12:57:59.016677  153493 status.go:463] ha-587709 apiserver status = Running (err=<nil>)
	I0210 12:57:59.016689  153493 status.go:176] ha-587709 status: &{Name:ha-587709 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:57:59.016744  153493 status.go:174] checking status of ha-587709-m02 ...
	I0210 12:57:59.017004  153493 cli_runner.go:164] Run: docker container inspect ha-587709-m02 --format={{.State.Status}}
	I0210 12:57:59.036782  153493 status.go:371] ha-587709-m02 host status = "Stopped" (err=<nil>)
	I0210 12:57:59.036806  153493 status.go:384] host is not running, skipping remaining checks
	I0210 12:57:59.036813  153493 status.go:176] ha-587709-m02 status: &{Name:ha-587709-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:57:59.036841  153493 status.go:174] checking status of ha-587709-m03 ...
	I0210 12:57:59.037113  153493 cli_runner.go:164] Run: docker container inspect ha-587709-m03 --format={{.State.Status}}
	I0210 12:57:59.055772  153493 status.go:371] ha-587709-m03 host status = "Running" (err=<nil>)
	I0210 12:57:59.055799  153493 host.go:66] Checking if "ha-587709-m03" exists ...
	I0210 12:57:59.056076  153493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-587709-m03
	I0210 12:57:59.073585  153493 host.go:66] Checking if "ha-587709-m03" exists ...
	I0210 12:57:59.073849  153493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:57:59.073887  153493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-587709-m03
	I0210 12:57:59.091538  153493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/ha-587709-m03/id_rsa Username:docker}
	I0210 12:57:59.181545  153493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:57:59.192688  153493 kubeconfig.go:125] found "ha-587709" server: "https://192.168.49.254:8443"
	I0210 12:57:59.192721  153493 api_server.go:166] Checking apiserver status ...
	I0210 12:57:59.192761  153493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:57:59.202579  153493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1434/cgroup
	I0210 12:57:59.211583  153493 api_server.go:182] apiserver freezer: "9:freezer:/docker/218a0b2dc38f624c8804fb78b5b7ab4aff1ae3817426a309436147a5aca7b520/kubepods/burstable/pod18ce932f3ed8f8d566cb914f3cc6e553/785d63fab3e0189ba6ff0a278b1756958461b054ea938e130a33e9e830b2caf4"
	I0210 12:57:59.211655  153493 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/218a0b2dc38f624c8804fb78b5b7ab4aff1ae3817426a309436147a5aca7b520/kubepods/burstable/pod18ce932f3ed8f8d566cb914f3cc6e553/785d63fab3e0189ba6ff0a278b1756958461b054ea938e130a33e9e830b2caf4/freezer.state
	I0210 12:57:59.219608  153493 api_server.go:204] freezer state: "THAWED"
	I0210 12:57:59.219637  153493 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0210 12:57:59.223504  153493 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0210 12:57:59.223529  153493 status.go:463] ha-587709-m03 apiserver status = Running (err=<nil>)
	I0210 12:57:59.223537  153493 status.go:176] ha-587709-m03 status: &{Name:ha-587709-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:57:59.223556  153493 status.go:174] checking status of ha-587709-m04 ...
	I0210 12:57:59.223830  153493 cli_runner.go:164] Run: docker container inspect ha-587709-m04 --format={{.State.Status}}
	I0210 12:57:59.240834  153493 status.go:371] ha-587709-m04 host status = "Running" (err=<nil>)
	I0210 12:57:59.240861  153493 host.go:66] Checking if "ha-587709-m04" exists ...
	I0210 12:57:59.241102  153493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-587709-m04
	I0210 12:57:59.258055  153493 host.go:66] Checking if "ha-587709-m04" exists ...
	I0210 12:57:59.258324  153493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:57:59.258361  153493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-587709-m04
	I0210 12:57:59.275544  153493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/ha-587709-m04/id_rsa Username:docker}
	I0210 12:57:59.369251  153493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:57:59.379589  153493 status.go:176] ha-587709-m04 status: &{Name:ha-587709-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.50s)
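
Note that `minikube status` intentionally exits non-zero (7 in this run) once any node reports Stopped, so the test reads the printed status rather than requiring exit code 0. A small sketch of surfacing that exit code, assuming the same profile; illustrative only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-587709", "status").Output()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero code flags that at least one host or component is not
		// running, which is expected right after `node stop m02`.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run status:", err)
	}
}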

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (15.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-587709 node start m02 -v=7 --alsologtostderr: (14.426076688s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.34s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-587709 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-587709 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-587709 -v=7 --alsologtostderr: (36.687798407s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587709 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-587709 --wait=true -v=7 --alsologtostderr: (1m0.938479431s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-587709
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.73s)
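
The restart check above is a full `stop` followed by `start --wait=true`, with `node list` compared before and after. A compressed Go sketch of the same sequence, assuming the profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary and returns its combined output.
func mk(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	return string(out), err
}

func main() {
	before, _ := mk("node", "list", "-p", "ha-587709")

	// Stop every node, then bring the whole cluster back and wait for readiness.
	if _, err := mk("stop", "-p", "ha-587709"); err != nil {
		fmt.Println("stop failed:", err)
		return
	}
	if _, err := mk("start", "-p", "ha-587709", "--wait=true"); err != nil {
		fmt.Println("start failed:", err)
		return
	}

	after, _ := mk("node", "list", "-p", "ha-587709")
	fmt.Println("node list unchanged across restart:", before == after)
}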

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 node delete m03 -v=7 --alsologtostderr
E0210 12:59:54.876199   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-587709 node delete m03 -v=7 --alsologtostderr: (8.394863578s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.18s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 stop -v=7 --alsologtostderr
E0210 13:00:08.508666   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:08.515089   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:08.526520   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:08.548054   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:08.589474   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:08.670930   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:08.832556   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:09.154199   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:09.795650   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:11.077773   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:13.640626   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:18.762567   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:00:29.003981   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-587709 stop -v=7 --alsologtostderr: (35.53342224s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr: exit status 7 (109.491269ms)

                                                
                                                
-- stdout --
	ha-587709
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587709-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-587709-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:00:39.423193  170166 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:00:39.423318  170166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:00:39.423327  170166 out.go:358] Setting ErrFile to fd 2...
	I0210 13:00:39.423332  170166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:00:39.423511  170166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 13:00:39.423705  170166 out.go:352] Setting JSON to false
	I0210 13:00:39.423737  170166 mustload.go:65] Loading cluster: ha-587709
	I0210 13:00:39.423793  170166 notify.go:220] Checking for updates...
	I0210 13:00:39.424175  170166 config.go:182] Loaded profile config "ha-587709": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 13:00:39.424194  170166 status.go:174] checking status of ha-587709 ...
	I0210 13:00:39.424626  170166 cli_runner.go:164] Run: docker container inspect ha-587709 --format={{.State.Status}}
	I0210 13:00:39.444763  170166 status.go:371] ha-587709 host status = "Stopped" (err=<nil>)
	I0210 13:00:39.444877  170166 status.go:384] host is not running, skipping remaining checks
	I0210 13:00:39.444887  170166 status.go:176] ha-587709 status: &{Name:ha-587709 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:00:39.444926  170166 status.go:174] checking status of ha-587709-m02 ...
	I0210 13:00:39.445204  170166 cli_runner.go:164] Run: docker container inspect ha-587709-m02 --format={{.State.Status}}
	I0210 13:00:39.463654  170166 status.go:371] ha-587709-m02 host status = "Stopped" (err=<nil>)
	I0210 13:00:39.463679  170166 status.go:384] host is not running, skipping remaining checks
	I0210 13:00:39.463687  170166 status.go:176] ha-587709-m02 status: &{Name:ha-587709-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:00:39.463721  170166 status.go:174] checking status of ha-587709-m04 ...
	I0210 13:00:39.463980  170166 cli_runner.go:164] Run: docker container inspect ha-587709-m04 --format={{.State.Status}}
	I0210 13:00:39.482030  170166 status.go:371] ha-587709-m04 host status = "Stopped" (err=<nil>)
	I0210 13:00:39.482083  170166 status.go:384] host is not running, skipping remaining checks
	I0210 13:00:39.482092  170166 status.go:176] ha-587709-m04 status: &{Name:ha-587709-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (83.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-587709 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0210 13:00:49.485720   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:01:17.947807   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:01:30.447064   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-587709 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m23.175516013s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (83.95s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (36.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-587709 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-587709 --control-plane -v=7 --alsologtostderr: (35.823639297s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-587709 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (36.67s)
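
Growing the cluster back to three control planes is a single `node add --control-plane`, and the follow-up `status` confirms the new member joined. A minimal Go sketch using the flags shown above; the profile name is the one from this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Add another control-plane node to the existing HA profile.
	if out, err := exec.Command("out/minikube-linux-amd64",
		"node", "add", "-p", "ha-587709", "--control-plane").CombinedOutput(); err != nil {
		log.Fatalf("node add failed: %v\n%s", err, out)
	}

	// All nodes should report Running/Configured afterwards (exit code 0).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-587709", "status").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("status reports a degraded cluster: %v", err)
	}
}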

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (54.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-485088 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0210 13:02:52.368629   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-485088 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (54.891554044s)
--- PASS: TestJSONOutput/start/Command (54.89s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-485088 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-485088 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.61s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-485088 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-485088 --output=json --user=testUser: (5.614280182s)
--- PASS: TestJSONOutput/stop/Command (5.61s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-302143 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-302143 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (66.96596ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4ae83cf8-906e-4922-aa9c-ab0a94322150","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-302143] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4ef85f34-fc94-4a90-ac62-95d23f6e23aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20390"}}
	{"specversion":"1.0","id":"13fb5b7d-660e-49e2-859f-7b78f039149b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f8ca8df5-050e-4ef4-8143-cf6863327f36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig"}}
	{"specversion":"1.0","id":"6676f5b2-5ac9-4c06-96bc-51318fdbb687","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube"}}
	{"specversion":"1.0","id":"fe2a9fc4-85e6-4a92-a4df-95c281d47846","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6ac47c12-7459-4881-977c-e316e02ec360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f63f629-47ad-4775-b16f-29dde4b759a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-302143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-302143
--- PASS: TestErrorJSONOutput (0.21s)
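
With --output=json every line minikube prints is a CloudEvents envelope like those quoted above, so callers can pick out the error event by its type suffix. A small Go decoder sketch; the struct models only the fields visible in this output and is illustrative, not minikube's own types:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event models the subset of the CloudEvents envelope visible in the output above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe the stdout of `minikube start ... --output=json` into this program.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore any non-JSON lines
		}
		if strings.HasSuffix(e.Type, ".error") {
			fmt.Printf("%s (exit code %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}

Fed the output quoted above, this would print the DRV_UNSUPPORTED_OS event with its exit code 56.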

                                                
                                    
TestKicCustomNetwork/create_custom_network (27.03s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-175770 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-175770 --network=: (25.033947341s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-175770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-175770
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-175770: (1.978118147s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.03s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.83s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-691894 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-691894 --network=bridge: (20.909887194s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-691894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-691894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-691894: (1.896820919s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.83s)

                                                
                                    
TestKicExistingNetwork (25.01s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0210 13:04:45.202871   78349 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0210 13:04:45.219362   78349 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0210 13:04:45.219452   78349 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0210 13:04:45.219484   78349 cli_runner.go:164] Run: docker network inspect existing-network
W0210 13:04:45.234929   78349 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0210 13:04:45.234966   78349 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0210 13:04:45.234982   78349 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0210 13:04:45.235132   78349 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0210 13:04:45.251463   78349 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-128157597f39 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:06:3d:3e:97} reservation:<nil>}
I0210 13:04:45.252006   78349 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001825fa0}
I0210 13:04:45.252041   78349 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0210 13:04:45.252094   78349 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0210 13:04:45.312501   78349 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-525795 --network=existing-network
E0210 13:04:54.880677   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:05:08.508519   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-525795 --network=existing-network: (23.352998012s)
helpers_test.go:175: Cleaning up "existing-network-525795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-525795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-525795: (1.515894827s)
I0210 13:05:10.198821   78349 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.01s)
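
The pre-existing-network case reduces to creating a bridge network with docker and pointing `minikube start --network` at it. A Go sketch using a subset of the flags logged above (subnet and names are the ones from this run, otherwise arbitrary):

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Create the bridge network first, as network_create.go does above
	// (only a subset of the logged flags is shown).
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"existing-network")

	// Then start a profile attached to that pre-existing network.
	run("out/minikube-linux-amd64", "start", "-p", "existing-network-525795",
		"--network=existing-network")
}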

                                                
                                    
TestKicCustomSubnet (26.16s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-126488 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-126488 --subnet=192.168.60.0/24: (24.064291966s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-126488 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-126488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-126488
E0210 13:05:36.210460   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-126488: (2.07722713s)
--- PASS: TestKicCustomSubnet (26.16s)

                                                
                                    
TestKicStaticIP (24.27s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-266664 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-266664 --static-ip=192.168.200.200: (22.173910711s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-266664 ip
helpers_test.go:175: Cleaning up "static-ip-266664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-266664
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-266664: (1.972415409s)
--- PASS: TestKicStaticIP (24.27s)
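
The static-IP check is a start with --static-ip followed by `minikube ip` and a string comparison. A short Go sketch with the same address and profile name as above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.200.200"

	// Start a profile pinned to a specific container IP.
	if out, err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "static-ip-266664", "--static-ip="+want).CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	// `minikube ip` should report exactly the address we asked for.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "static-ip-266664", "ip").Output()
	if err != nil {
		log.Fatal(err)
	}
	got := strings.TrimSpace(string(out))
	fmt.Printf("want %s, got %s, match=%v\n", want, got, got == want)
}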

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (46.69s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-786125 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-786125 --driver=docker  --container-runtime=containerd: (20.361762754s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-806416 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-806416 --driver=docker  --container-runtime=containerd: (21.440251953s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-786125
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-806416
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-806416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-806416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-806416: (1.867700862s)
helpers_test.go:175: Cleaning up "first-786125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-786125
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-786125: (1.866741613s)
--- PASS: TestMinikubeProfile (46.69s)

TestMountStart/serial/StartWithMountFirst (5.3s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-984841 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-984841 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.300905274s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.30s)

TestMountStart/serial/VerifyMountFirst (0.25s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-984841 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.8s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-001739 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-001739 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.796847012s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.80s)

TestMountStart/serial/VerifyMountSecond (0.24s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-001739 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.61s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-984841 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-984841 --alsologtostderr -v=5: (1.613645905s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-001739 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-001739
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-001739: (1.175757042s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (6.68s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-001739
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-001739: (5.681755934s)
--- PASS: TestMountStart/serial/RestartStopped (6.68s)

TestMountStart/serial/VerifyMountPostStop (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-001739 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (67.55s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315469 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-315469 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m7.101579426s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.55s)

TestMultiNode/serial/DeployApp2Nodes (17.3s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-315469 -- rollout status deployment/busybox: (15.933304984s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-68xfj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-ppsm6 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-68xfj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-ppsm6 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-68xfj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-ppsm6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.30s)

TestMultiNode/serial/PingHostFrom2Pods (0.71s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-68xfj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-68xfj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-ppsm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315469 -- exec busybox-58667487b6-ppsm6 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)

TestMultiNode/serial/AddNode (17.4s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-315469 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-315469 -v 3 --alsologtostderr: (16.748149799s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.40s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-315469 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (8.95s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp testdata/cp-test.txt multinode-315469:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2821202090/001/cp-test_multinode-315469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469:/home/docker/cp-test.txt multinode-315469-m02:/home/docker/cp-test_multinode-315469_multinode-315469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m02 "sudo cat /home/docker/cp-test_multinode-315469_multinode-315469-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469:/home/docker/cp-test.txt multinode-315469-m03:/home/docker/cp-test_multinode-315469_multinode-315469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m03 "sudo cat /home/docker/cp-test_multinode-315469_multinode-315469-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp testdata/cp-test.txt multinode-315469-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2821202090/001/cp-test_multinode-315469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469-m02:/home/docker/cp-test.txt multinode-315469:/home/docker/cp-test_multinode-315469-m02_multinode-315469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469 "sudo cat /home/docker/cp-test_multinode-315469-m02_multinode-315469.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469-m02:/home/docker/cp-test.txt multinode-315469-m03:/home/docker/cp-test_multinode-315469-m02_multinode-315469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m03 "sudo cat /home/docker/cp-test_multinode-315469-m02_multinode-315469-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp testdata/cp-test.txt multinode-315469-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2821202090/001/cp-test_multinode-315469-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469-m03:/home/docker/cp-test.txt multinode-315469:/home/docker/cp-test_multinode-315469-m03_multinode-315469.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469 "sudo cat /home/docker/cp-test_multinode-315469-m03_multinode-315469.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 cp multinode-315469-m03:/home/docker/cp-test.txt multinode-315469-m02:/home/docker/cp-test_multinode-315469-m03_multinode-315469-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 ssh -n multinode-315469-m02 "sudo cat /home/docker/cp-test_multinode-315469-m03_multinode-315469-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.95s)

TestMultiNode/serial/StopNode (2.09s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-315469 node stop m03: (1.177590349s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-315469 status: exit status 7 (457.294344ms)

-- stdout --
	multinode-315469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-315469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-315469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-315469 status --alsologtostderr: exit status 7 (456.680407ms)

-- stdout --
	multinode-315469
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-315469-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-315469-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0210 13:09:06.938504  235211 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:09:06.938627  235211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:09:06.938637  235211 out.go:358] Setting ErrFile to fd 2...
	I0210 13:09:06.938641  235211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:09:06.938836  235211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 13:09:06.938992  235211 out.go:352] Setting JSON to false
	I0210 13:09:06.939017  235211 mustload.go:65] Loading cluster: multinode-315469
	I0210 13:09:06.939122  235211 notify.go:220] Checking for updates...
	I0210 13:09:06.939416  235211 config.go:182] Loaded profile config "multinode-315469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 13:09:06.939435  235211 status.go:174] checking status of multinode-315469 ...
	I0210 13:09:06.939876  235211 cli_runner.go:164] Run: docker container inspect multinode-315469 --format={{.State.Status}}
	I0210 13:09:06.957819  235211 status.go:371] multinode-315469 host status = "Running" (err=<nil>)
	I0210 13:09:06.957846  235211 host.go:66] Checking if "multinode-315469" exists ...
	I0210 13:09:06.958081  235211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-315469
	I0210 13:09:06.974927  235211 host.go:66] Checking if "multinode-315469" exists ...
	I0210 13:09:06.975183  235211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 13:09:06.975234  235211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-315469
	I0210 13:09:06.992108  235211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/multinode-315469/id_rsa Username:docker}
	I0210 13:09:07.085495  235211 ssh_runner.go:195] Run: systemctl --version
	I0210 13:09:07.089425  235211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:09:07.100032  235211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 13:09:07.145053  235211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:64 SystemTime:2025-02-10 13:09:07.136579072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 13:09:07.145590  235211 kubeconfig.go:125] found "multinode-315469" server: "https://192.168.67.2:8443"
	I0210 13:09:07.145623  235211 api_server.go:166] Checking apiserver status ...
	I0210 13:09:07.145656  235211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 13:09:07.155968  235211 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1483/cgroup
	I0210 13:09:07.164359  235211 api_server.go:182] apiserver freezer: "9:freezer:/docker/3009532c65286e35d3c1607e995505d20353e13359324eb287512b7d3603bab6/kubepods/burstable/podc01f05b27d9604ed8d0be321b093227a/29c4c5b4e5708793a2a62c7d938d8876d216f2c0a7fc466def3d2e614b6ef0e4"
	I0210 13:09:07.164425  235211 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3009532c65286e35d3c1607e995505d20353e13359324eb287512b7d3603bab6/kubepods/burstable/podc01f05b27d9604ed8d0be321b093227a/29c4c5b4e5708793a2a62c7d938d8876d216f2c0a7fc466def3d2e614b6ef0e4/freezer.state
	I0210 13:09:07.172606  235211 api_server.go:204] freezer state: "THAWED"
	I0210 13:09:07.172632  235211 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0210 13:09:07.177529  235211 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0210 13:09:07.177556  235211 status.go:463] multinode-315469 apiserver status = Running (err=<nil>)
	I0210 13:09:07.177565  235211 status.go:176] multinode-315469 status: &{Name:multinode-315469 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:09:07.177587  235211 status.go:174] checking status of multinode-315469-m02 ...
	I0210 13:09:07.177842  235211 cli_runner.go:164] Run: docker container inspect multinode-315469-m02 --format={{.State.Status}}
	I0210 13:09:07.194934  235211 status.go:371] multinode-315469-m02 host status = "Running" (err=<nil>)
	I0210 13:09:07.194958  235211 host.go:66] Checking if "multinode-315469-m02" exists ...
	I0210 13:09:07.195260  235211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-315469-m02
	I0210 13:09:07.212419  235211 host.go:66] Checking if "multinode-315469-m02" exists ...
	I0210 13:09:07.212705  235211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 13:09:07.212740  235211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-315469-m02
	I0210 13:09:07.230001  235211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/20390-71607/.minikube/machines/multinode-315469-m02/id_rsa Username:docker}
	I0210 13:09:07.317264  235211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 13:09:07.327519  235211 status.go:176] multinode-315469-m02 status: &{Name:multinode-315469-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:09:07.327557  235211 status.go:174] checking status of multinode-315469-m03 ...
	I0210 13:09:07.327839  235211 cli_runner.go:164] Run: docker container inspect multinode-315469-m03 --format={{.State.Status}}
	I0210 13:09:07.345159  235211 status.go:371] multinode-315469-m03 host status = "Stopped" (err=<nil>)
	I0210 13:09:07.345181  235211 status.go:384] host is not running, skipping remaining checks
	I0210 13:09:07.345195  235211 status.go:176] multinode-315469-m03 status: &{Name:multinode-315469-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)

TestMultiNode/serial/StartAfterStop (8.6s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-315469 node start m03 -v=7 --alsologtostderr: (7.939722238s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.60s)

TestMultiNode/serial/RestartKeepsNodes (78.51s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-315469
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-315469
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-315469: (24.728958761s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315469 --wait=true -v=8 --alsologtostderr
E0210 13:09:54.876657   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:10:08.508860   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-315469 --wait=true -v=8 --alsologtostderr: (53.681743196s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-315469
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.51s)

TestMultiNode/serial/DeleteNode (4.95s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-315469 node delete m03: (4.390545116s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.95s)

TestMultiNode/serial/StopMultiNode (23.83s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-315469 stop: (23.652155021s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-315469 status: exit status 7 (89.348944ms)

-- stdout --
	multinode-315469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-315469-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-315469 status --alsologtostderr: exit status 7 (87.064808ms)

-- stdout --
	multinode-315469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-315469-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0210 13:11:03.191079  244862 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:11:03.191209  244862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:11:03.191222  244862 out.go:358] Setting ErrFile to fd 2...
	I0210 13:11:03.191228  244862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:11:03.191422  244862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 13:11:03.191595  244862 out.go:352] Setting JSON to false
	I0210 13:11:03.191621  244862 mustload.go:65] Loading cluster: multinode-315469
	I0210 13:11:03.191767  244862 notify.go:220] Checking for updates...
	I0210 13:11:03.192045  244862 config.go:182] Loaded profile config "multinode-315469": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 13:11:03.192064  244862 status.go:174] checking status of multinode-315469 ...
	I0210 13:11:03.192526  244862 cli_runner.go:164] Run: docker container inspect multinode-315469 --format={{.State.Status}}
	I0210 13:11:03.210923  244862 status.go:371] multinode-315469 host status = "Stopped" (err=<nil>)
	I0210 13:11:03.210945  244862 status.go:384] host is not running, skipping remaining checks
	I0210 13:11:03.210952  244862 status.go:176] multinode-315469 status: &{Name:multinode-315469 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 13:11:03.210978  244862 status.go:174] checking status of multinode-315469-m02 ...
	I0210 13:11:03.211250  244862 cli_runner.go:164] Run: docker container inspect multinode-315469-m02 --format={{.State.Status}}
	I0210 13:11:03.229633  244862 status.go:371] multinode-315469-m02 host status = "Stopped" (err=<nil>)
	I0210 13:11:03.229709  244862 status.go:384] host is not running, skipping remaining checks
	I0210 13:11:03.229722  244862 status.go:176] multinode-315469-m02 status: &{Name:multinode-315469-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

TestMultiNode/serial/RestartMultiNode (48.12s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315469 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-315469 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.551627023s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315469 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.12s)

TestMultiNode/serial/ValidateNameConflict (22.36s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-315469
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315469-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-315469-m02 --driver=docker  --container-runtime=containerd: exit status 14 (66.633243ms)

-- stdout --
	* [multinode-315469-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-315469-m02' is duplicated with machine name 'multinode-315469-m02' in profile 'multinode-315469'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315469-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-315469-m03 --driver=docker  --container-runtime=containerd: (20.116591308s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-315469
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-315469: exit status 80 (271.158065ms)

-- stdout --
	* Adding node m03 to cluster multinode-315469 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-315469-m03 already exists in multinode-315469-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-315469-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-315469-m03: (1.851165405s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.36s)

TestPreload (91.12s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-251344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-251344 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m0.635750566s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-251344 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-251344
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-251344: (11.876081638s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-251344 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-251344 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (15.327327433s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-251344 image list
helpers_test.go:175: Cleaning up "test-preload-251344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-251344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-251344: (2.274712249s)
--- PASS: TestPreload (91.12s)

TestScheduledStopUnix (96.66s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-002032 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-002032 --memory=2048 --driver=docker  --container-runtime=containerd: (20.070282061s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-002032 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-002032 -n scheduled-stop-002032
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-002032 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0210 13:14:09.057861   78349 retry.go:31] will retry after 127.293µs: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.059055   78349 retry.go:31] will retry after 125.036µs: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.060191   78349 retry.go:31] will retry after 322.514µs: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.061331   78349 retry.go:31] will retry after 179.764µs: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.062471   78349 retry.go:31] will retry after 720.826µs: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.063603   78349 retry.go:31] will retry after 1.091373ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.065807   78349 retry.go:31] will retry after 1.51429ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.068027   78349 retry.go:31] will retry after 1.490795ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.070227   78349 retry.go:31] will retry after 1.477712ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.072439   78349 retry.go:31] will retry after 4.751174ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.077693   78349 retry.go:31] will retry after 6.038404ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.083981   78349 retry.go:31] will retry after 11.598562ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.096252   78349 retry.go:31] will retry after 8.009013ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.104509   78349 retry.go:31] will retry after 25.894019ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
I0210 13:14:09.130758   78349 retry.go:31] will retry after 41.835738ms: open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/scheduled-stop-002032/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-002032 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-002032 -n scheduled-stop-002032
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-002032
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-002032 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0210 13:14:54.879209   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
E0210 13:15:08.510796   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-002032
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-002032: exit status 7 (68.46232ms)

-- stdout --
	scheduled-stop-002032
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-002032 -n scheduled-stop-002032
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-002032 -n scheduled-stop-002032: exit status 7 (70.04295ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-002032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-002032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-002032: (5.247405991s)
--- PASS: TestScheduledStopUnix (96.66s)

TestInsufficientStorage (12.35s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-558047 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-558047 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.000659633s)

-- stdout --
	{"specversion":"1.0","id":"c1e4f2b4-d17c-4a05-a44c-eafef500a114","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-558047] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"801743bc-daf7-4c2a-a40f-1aadd5235cf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20390"}}
	{"specversion":"1.0","id":"7597964c-1125-4f15-9750-1aa517946fd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a9493ee6-2417-4763-aa2a-7c3092d64b12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig"}}
	{"specversion":"1.0","id":"d40876a6-245f-4f17-a4b9-6bdacaaf00e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube"}}
	{"specversion":"1.0","id":"f153cc90-c575-4974-a12d-08fc515d1a15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"692c0181-d1b3-45ae-9c60-fc3c27d87be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f846ca6a-c935-44fd-a8ab-4b43d4d8de05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4058c13a-1528-43b2-9381-e55f7b7af13b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"eebfb2fe-62d8-419c-bdbc-7450feb819df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"22fb7dee-1602-4400-9627-b9caf7760bfa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"32970f79-8130-476c-a2f4-2815849d8e79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-558047\" primary control-plane node in \"insufficient-storage-558047\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"204aba65-794a-4d42-aba7-23a393202bf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"de9d2fd6-2cb2-4aa9-bc7f-643219abddcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fee61db4-67b4-42ca-a131-c3ba9a655bbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-558047 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-558047 --output=json --layout=cluster: exit status 7 (267.149615ms)

-- stdout --
	{"Name":"insufficient-storage-558047","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-558047","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0210 13:15:35.498564  267696 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-558047" does not appear in /home/jenkins/minikube-integration/20390-71607/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-558047 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-558047 --output=json --layout=cluster: exit status 7 (255.553091ms)

-- stdout --
	{"Name":"insufficient-storage-558047","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-558047","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0210 13:15:35.754636  267814 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-558047" does not appear in /home/jenkins/minikube-integration/20390-71607/kubeconfig
	E0210 13:15:35.764625  267814 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/insufficient-storage-558047/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-558047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-558047
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-558047: (1.830314276s)
--- PASS: TestInsufficientStorage (12.35s)
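The `status --output=json --layout=cluster` payloads above (StatusCode 507, StatusName "InsufficientStorage") are plain JSON and straightforward to consume programmatically. Below is a minimal decoding sketch that assumes nothing beyond the field names visible in the log; the struct name and layout are invented for illustration and are not minikube's own types.

package main

import (
	"encoding/json"
	"fmt"
)

// clusterState mirrors only the fields visible in the status JSON above;
// the type and field set are illustrative, not minikube's definitions.
type clusterState struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	raw := `{"Name":"insufficient-storage-558047","StatusCode":507,"StatusName":"InsufficientStorage","Nodes":[{"Name":"insufficient-storage-558047","StatusCode":507,"StatusName":"InsufficientStorage"}]}`
	var st clusterState
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (node %s)\n", st.Name, st.StatusName, st.Nodes[0].StatusName)
}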

                                                
                                    
x
+
TestRunningBinaryUpgrade (63.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2763671947 start -p running-upgrade-283983 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2763671947 start -p running-upgrade-283983 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (25.046842711s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-283983 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-283983 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.182361263s)
helpers_test.go:175: Cleaning up "running-upgrade-283983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-283983
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-283983: (2.313904053s)
--- PASS: TestRunningBinaryUpgrade (63.04s)

                                                
                                    
x
+
TestKubernetesUpgrade (320.13s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-387020 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-387020 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.680122151s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-387020
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-387020: (1.183934925s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-387020 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-387020 status --format={{.Host}}: exit status 7 (65.948545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-387020 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-387020 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m25.325126s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-387020 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-387020 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-387020 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (70.453328ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-387020] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-387020
	    minikube start -p kubernetes-upgrade-387020 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3870202 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-387020 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-387020 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-387020 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.41313858s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-387020" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-387020
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-387020: (2.325363152s)
--- PASS: TestKubernetesUpgrade (320.13s)
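The exit status 106 above comes from the K8S_DOWNGRADE_UNSUPPORTED guard: an existing v1.32.1 cluster cannot be moved back to v1.20.0 in place. A minimal sketch of that kind of version guard, using golang.org/x/mod/semver rather than minikube's actual code:

package main

import (
	"fmt"

	"golang.org/x/mod/semver" // external module: go get golang.org/x/mod
)

// downgradeRequested reports whether the requested Kubernetes version is
// older than the one the existing cluster already runs.
func downgradeRequested(running, requested string) bool {
	return semver.Compare(requested, running) < 0
}

func main() {
	// Values taken from the log above: existing v1.32.1, requested v1.20.0.
	if downgradeRequested("v1.32.1", "v1.20.0") {
		fmt.Println("refusing in-place downgrade; delete the profile or pick a newer version")
	}
}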

                                                
                                    
x
+
TestMissingContainerUpgrade (107.24s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3408773241 start -p missing-upgrade-347928 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3408773241 start -p missing-upgrade-347928 --memory=2200 --driver=docker  --container-runtime=containerd: (49.193659434s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-347928
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-347928: (10.248703024s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-347928
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-347928 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-347928 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (45.153426665s)
helpers_test.go:175: Cleaning up "missing-upgrade-347928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-347928
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-347928: (1.974871163s)
--- PASS: TestMissingContainerUpgrade (107.24s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480745 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-480745 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (88.508945ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-480745] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
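The exit status 14 (MK_USAGE) above is a flag-validation failure: --kubernetes-version and --no-kubernetes are mutually exclusive. A small illustrative sketch of such a check using the standard flag package, not the real minikube CLI wiring:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	// Mirrors the MK_USAGE rule above: the two flags are mutually exclusive.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // exit status 14 is what the log reports for MK_USAGE
	}
}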

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (37.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480745 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-480745 --driver=docker  --container-runtime=containerd: (37.283902866s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-480745 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (132.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2055457139 start -p stopped-upgrade-519710 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2055457139 start -p stopped-upgrade-519710 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m14.075859937s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2055457139 -p stopped-upgrade-519710 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2055457139 -p stopped-upgrade-519710 stop: (19.915738275s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-519710 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-519710 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.891456636s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (132.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480745 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-480745 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.917587672s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-480745 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-480745 status -o json: exit status 2 (276.726174ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-480745","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-480745
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-480745: (1.873484968s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480745 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-480745 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.200602454s)
--- PASS: TestNoKubernetes/serial/Start (4.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-480745 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-480745 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.047173ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
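The non-zero exit here is the expected outcome: systemctl is-active --quiet answers purely via exit code, 0 only when the unit is active, and the status 3 reported over ssh means the kubelet unit is inactive. A tiny sketch of the same check, assuming it runs inside the node rather than through minikube ssh:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet systemd unit is active;
// --quiet suppresses output so only the exit code carries the answer.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}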

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (3.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.730253406s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-480745
E0210 13:16:31.572359   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-480745: (1.492097151s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-480745 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-480745 --driver=docker  --container-runtime=containerd: (6.401778117s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-480745 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-480745 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.873653ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-127768 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-127768 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (148.279068ms)

                                                
                                                
-- stdout --
	* [false-127768] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20390
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 13:17:10.283305  299628 out.go:345] Setting OutFile to fd 1 ...
	I0210 13:17:10.283571  299628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:17:10.283582  299628 out.go:358] Setting ErrFile to fd 2...
	I0210 13:17:10.283587  299628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 13:17:10.283829  299628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20390-71607/.minikube/bin
	I0210 13:17:10.284509  299628 out.go:352] Setting JSON to false
	I0210 13:17:10.285762  299628 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":14379,"bootTime":1739179051,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0210 13:17:10.285833  299628 start.go:139] virtualization: kvm guest
	I0210 13:17:10.288047  299628 out.go:177] * [false-127768] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0210 13:17:10.289842  299628 notify.go:220] Checking for updates...
	I0210 13:17:10.291374  299628 out.go:177]   - MINIKUBE_LOCATION=20390
	I0210 13:17:10.293014  299628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 13:17:10.294491  299628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20390-71607/kubeconfig
	I0210 13:17:10.295894  299628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20390-71607/.minikube
	I0210 13:17:10.297146  299628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0210 13:17:10.298571  299628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 13:17:10.300622  299628 config.go:182] Loaded profile config "cert-expiration-504430": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
	I0210 13:17:10.300788  299628 config.go:182] Loaded profile config "running-upgrade-283983": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0210 13:17:10.300948  299628 config.go:182] Loaded profile config "stopped-upgrade-519710": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0210 13:17:10.301078  299628 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 13:17:10.324205  299628 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
	I0210 13:17:10.324307  299628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0210 13:17:10.373932  299628 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:78 SystemTime:2025-02-10 13:17:10.364591615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647992832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0210 13:17:10.374046  299628 docker.go:318] overlay module found
	I0210 13:17:10.376047  299628 out.go:177] * Using the docker driver based on user configuration
	I0210 13:17:10.377481  299628 start.go:297] selected driver: docker
	I0210 13:17:10.377504  299628 start.go:901] validating driver "docker" against <nil>
	I0210 13:17:10.377518  299628 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 13:17:10.380018  299628 out.go:201] 
	W0210 13:17:10.381404  299628 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0210 13:17:10.382591  299628 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-127768 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-127768" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 13:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-504430
contexts:
- context:
    cluster: cert-expiration-504430
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 13:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-504430
  name: cert-expiration-504430
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-504430
  user:
    client-certificate: /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/cert-expiration-504430/client.crt
    client-key: /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/cert-expiration-504430/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-127768

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127768"

                                                
                                                
----------------------- debugLogs end: false-127768 [took: 2.977308964s] --------------------------------
helpers_test.go:175: Cleaning up "false-127768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-127768
--- PASS: TestNetworkPlugins/group/false (3.27s)
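The exit status 14 in this group is another MK_USAGE validation: with the containerd runtime a CNI is required, so --cni=false is rejected before any cluster is created. A minimal sketch of that rule covering only the containerd case shown in the log (anything broader would be an assumption):

package main

import (
	"fmt"
	"os"
)

// cniRequired mirrors the rule in the error above for the containerd runtime;
// the log only demonstrates containerd, so other runtimes are left out here.
func cniRequired(runtime string) bool {
	return runtime == "containerd"
}

func main() {
	runtime, cni := "containerd", "false"
	if cni == "false" && cniRequired(runtime) {
		fmt.Fprintf(os.Stderr, "X Exiting due to MK_USAGE: The %q container runtime requires CNI\n", runtime)
		os.Exit(14)
	}
}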

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-519710
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-519710: (1.173045048s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                    
x
+
TestPause/serial/Start (45.66s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-970189 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0210 13:17:57.949588   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-970189 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (45.663096855s)
--- PASS: TestPause/serial/Start (45.66s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.04s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-970189 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-970189 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.033877844s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.04s)

                                                
                                    
x
+
TestPause/serial/Pause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-970189 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-970189 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-970189 --output=json --layout=cluster: exit status 2 (289.503234ms)

                                                
                                                
-- stdout --
	{"Name":"pause-970189","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-970189","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
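The layout=cluster output reuses HTTP-like status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error and 507 InsufficientStorage all appear in this report. A small lookup table assembled from those observed values (not minikube's authoritative list):

package main

import "fmt"

// statusNames collects the layout=cluster codes seen in this report.
var statusNames = map[int]string{
	200: "OK",
	405: "Stopped",
	418: "Paused",
	500: "Error",
	507: "InsufficientStorage",
}

func main() {
	fmt.Println(418, "=>", statusNames[418]) // the pause-970189 state above
}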

                                                
                                    
x
+
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-970189 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.76s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-970189 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (6s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-970189 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-970189 --alsologtostderr -v=5: (6.004623962s)
--- PASS: TestPause/serial/DeletePaused (6.00s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (17.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (17.191794564s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-970189
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-970189: exit status 1 (16.293302ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-970189: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (17.24s)
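The failing docker volume inspect is the point of this step: once the profile is deleted, the volume named after it should be gone, so exit status 1 with "no such volume" is the pass condition. A small illustrative wrapper around the same check (not the test helper itself):

package main

import (
	"fmt"
	"os/exec"
)

// volumeExists reports whether docker still knows about the named volume;
// `docker volume inspect` exits non-zero once the volume has been removed.
func volumeExists(name string) bool {
	return exec.Command("docker", "volume", "inspect", name).Run() == nil
}

func main() {
	fmt.Println("pause-970189 volume present:", volumeExists("pause-970189"))
}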

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (133.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-567589 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-567589 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m13.163629443s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (62.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-233558 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0210 13:19:54.877073   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/addons-444927/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-233558 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (1m2.206426722s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (42.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-755261 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
E0210 13:20:08.508578   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-755261 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (42.386269818s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-233558 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [520ee4b3-e53c-4259-8641-36a5422fdb48] Pending
helpers_test.go:344: "busybox" [520ee4b3-e53c-4259-8641-36a5422fdb48] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [520ee4b3-e53c-4259-8641-36a5422fdb48] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003487183s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-233558 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-233558 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-233558 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-233558 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-233558 --alsologtostderr -v=3: (12.01898514s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233558 -n no-preload-233558
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233558 -n no-preload-233558: exit status 7 (67.66796ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-233558 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (285.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-233558 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-233558 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m45.067519498s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-233558 -n no-preload-233558
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (285.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-755261 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [602097cb-fe99-4558-9ce5-238e6f6d1350] Pending
helpers_test.go:344: "busybox" [602097cb-fe99-4558-9ce5-238e6f6d1350] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [602097cb-fe99-4558-9ce5-238e6f6d1350] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.003804254s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-755261 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-755261 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-755261 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-755261 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-755261 --alsologtostderr -v=3: (11.978427971s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-755261 -n embed-certs-755261
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-755261 -n embed-certs-755261: exit status 7 (68.621825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-755261 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (262.72s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-755261 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-755261 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m22.368837534s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-755261 -n embed-certs-755261
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-567589 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5794ae90-941a-487b-a6e8-97fab1889ae0] Pending
helpers_test.go:344: "busybox" [5794ae90-941a-487b-a6e8-97fab1889ae0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5794ae90-941a-487b-a6e8-97fab1889ae0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.002817052s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-567589 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-567589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-567589 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-567589 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-567589 --alsologtostderr -v=3: (11.975114153s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567589 -n old-k8s-version-567589
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567589 -n old-k8s-version-567589: exit status 7 (73.361786ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-567589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (26.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-567589 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-567589 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (26.626997364s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-567589 -n old-k8s-version-567589
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (26.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (22.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q8xch" [82a2bf7c-f91f-476d-9dd2-61171545a791] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q8xch" [82a2bf7c-f91f-476d-9dd2-61171545a791] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.004049678s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (22.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-q8xch" [82a2bf7c-f91f-476d-9dd2-61171545a791] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003532234s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-567589 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-567589 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-567589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567589 -n old-k8s-version-567589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567589 -n old-k8s-version-567589: exit status 2 (309.663808ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-567589 -n old-k8s-version-567589
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-567589 -n old-k8s-version-567589: exit status 2 (288.249281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-567589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-567589 -n old-k8s-version-567589
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-567589 -n old-k8s-version-567589
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-679648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-679648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (44.794831181s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (25.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-477771 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-477771 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (25.55790755s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-679648 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0178a368-b8c8-478b-9d81-34d4883c3080] Pending
helpers_test.go:344: "busybox" [0178a368-b8c8-478b-9d81-34d4883c3080] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0178a368-b8c8-478b-9d81-34d4883c3080] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.00374674s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-679648 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-477771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-477771 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.169128346s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-477771 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-477771 --alsologtostderr -v=3: (1.208581097s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-477771 -n newest-cni-477771
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-477771 -n newest-cni-477771: exit status 7 (70.191277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-477771 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-477771 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-477771 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (12.652752283s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-477771 -n newest-cni-477771
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-679648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-679648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-679648 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-679648 --alsologtostderr -v=3: (11.878745743s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-477771 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-477771 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-477771 -n newest-cni-477771
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-477771 -n newest-cni-477771: exit status 2 (323.782862ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-477771 -n newest-cni-477771
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-477771 -n newest-cni-477771: exit status 2 (341.907353ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-477771 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-477771 -n newest-cni-477771
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-477771 -n newest-cni-477771
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648: exit status 7 (77.231947ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-679648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-679648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-679648 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.1: (4m25.026317315s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (265.39s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (44.616731484s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.62s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-127768 "pgrep -a kubelet"
I0210 13:24:36.257493   78349 config.go:182] Loaded profile config "auto-127768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-127768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-drkz7" [e7782978-bf94-4379-b973-2602a0acb498] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-drkz7" [e7782978-bf94-4379-b973-2602a0acb498] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.002746741s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-127768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (41.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0210 13:25:08.508836   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/functional-644291/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (41.879544906s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qdzl6" [34f85568-97c4-4eb1-8941-3cb7d73f0e61] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003137331s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-qdzl6" [34f85568-97c4-4eb1-8941-3cb7d73f0e61] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004194929s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-233558 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rjww6" [61233f96-458f-4112-9da2-4ef42325f72a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002804643s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-233558 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-233558 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-233558 -n no-preload-233558
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-233558 -n no-preload-233558: exit status 2 (306.869958ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-233558 -n no-preload-233558
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-233558 -n no-preload-233558: exit status 2 (293.871951ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-233558 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-233558 -n no-preload-233558
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-233558 -n no-preload-233558
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.69s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-rjww6" [61233f96-458f-4112-9da2-4ef42325f72a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003879186s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-755261 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (54.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (54.183726192s)
--- PASS: TestNetworkPlugins/group/calico/Start (54.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-755261 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-755261 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-755261 -n embed-certs-755261
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-755261 -n embed-certs-755261: exit status 2 (321.455806ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-755261 -n embed-certs-755261
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-755261 -n embed-certs-755261: exit status 2 (319.332004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-755261 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-755261 -n embed-certs-755261
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-755261 -n embed-certs-755261
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-t5k2g" [cf4f5aea-cd4d-4d86-b934-d46a61d870be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004284191s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (43.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (43.202143373s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (43.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-127768 "pgrep -a kubelet"
I0210 13:25:52.169399   78349 config.go:182] Loaded profile config "kindnet-127768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-127768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-m7brv" [ea445c93-48a2-415a-a107-2debb01a9b19] Pending
helpers_test.go:344: "netcat-5d86dc444-m7brv" [ea445c93-48a2-415a-a107-2debb01a9b19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-m7brv" [ea445c93-48a2-415a-a107-2debb01a9b19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003614482s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-127768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (64.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0210 13:26:26.300873   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/old-k8s-version-567589/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m4.953178403s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.95s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-127768 "pgrep -a kubelet"
I0210 13:26:30.028293   78349 config.go:182] Loaded profile config "custom-flannel-127768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-127768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-smnnk" [cc0f45f1-9cfe-4525-b1e7-7eafc17d1040] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0210 13:26:31.422756   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/old-k8s-version-567589/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-smnnk" [cc0f45f1-9cfe-4525-b1e7-7eafc17d1040] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004130375s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)
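The `waiting 15m0s for pods matching "app=netcat"` step is a readiness poll. A hedged sketch of one way to implement it, polling kubectl with a jsonpath template until every matching pod reports Running (the real helper in helpers_test.go also inspects readiness conditions; the context name and poll interval here are example values):

// waitpods.go - hedged sketch of the "waiting ... for pods matching <label>" step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func allRunning(ctxName, ns, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", ctxName, "-n", ns,
		"get", "pods", "-l", selector,
		"-o", "jsonpath={range .items[*]}{.status.phase}{\"\\n\"}{end}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range phases {
		if p != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		ok, err := allRunning("custom-flannel-127768", "default", "app=netcat")
		if err == nil && ok {
			fmt.Println("app=netcat is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}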

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dvdtm" [70db165e-e2ec-4bfd-a000-899c56b804ae] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00375621s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-127768 "pgrep -a kubelet"
I0210 13:26:39.014116   78349 config.go:182] Loaded profile config "calico-127768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-127768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-z77lx" [cdbe5a32-95f4-4bba-b2ca-fc1378605c64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-z77lx" [cdbe5a32-95f4-4bba-b2ca-fc1378605c64] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004118191s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-127768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-127768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (39.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0210 13:27:02.148713   78349 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/old-k8s-version-567589/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (39.373516831s)
--- PASS: TestNetworkPlugins/group/flannel/Start (39.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (41.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-127768 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (41.392716931s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-127768 "pgrep -a kubelet"
I0210 13:27:29.422846   78349 config.go:182] Loaded profile config "enable-default-cni-127768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-127768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xlstb" [2cf4b3df-6ec1-4844-b838-85773ca28ee9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xlstb" [2cf4b3df-6ec1-4844-b838-85773ca28ee9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003651185s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-127768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nqfbw" [c63e0076-0a17-44d2-8da9-897b4aec0f9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003872334s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-127768 "pgrep -a kubelet"
I0210 13:27:45.042623   78349 config.go:182] Loaded profile config "flannel-127768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)
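The KubeletFlags step uses `pgrep -a kubelet` to capture the running kubelet command line so its flags can be inspected. A hedged sketch of the same idea, checking that the kubelet is wired to containerd (the exact runtime-endpoint value is an assumption, not taken from this run's output):

// kubeletflags.go - hedged sketch: read the kubelet command line over `minikube ssh`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "flannel-127768" // example profile name
	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", profile, "pgrep -a kubelet").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	cmdline := string(out)
	fmt.Println(cmdline)
	// Assumed flag value for a containerd-backed node; verify against your own output.
	if strings.Contains(cmdline, "--container-runtime-endpoint=unix:///run/containerd/containerd.sock") {
		fmt.Println("kubelet is wired to containerd")
	}
}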

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-127768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-wm2v6" [e2be1c26-0f6d-440d-8d6b-c427dc814a57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-wm2v6" [e2be1c26-0f6d-440d-8d6b-c427dc814a57] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003171314s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-127768 "pgrep -a kubelet"
I0210 13:27:50.406089   78349 config.go:182] Loaded profile config "bridge-127768": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-127768 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-4lkxd" [97f57b03-ce80-41fd-91f5-d5c964f48604] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-4lkxd" [97f57b03-ce80-41fd-91f5-d5c964f48604] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003644271s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-127768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-127768 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-127768 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4d4k8" [49301615-5918-4c58-8197-c006081055e1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003597034s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-4d4k8" [49301615-5918-4c58-8197-c006081055e1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003728672s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-679648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-679648 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241212-9f82dd49
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
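VerifyKubernetesImages compares the images loaded in the profile against what minikube itself ships, and the "Found non-minikube image" lines above are leftovers from earlier tests in the same profile. A hedged sketch of that comparison, assuming the plain `image list` output is one image reference per line and using an illustrative prefix allowlist (the authoritative expectations live in start_stop_delete_test.go):

// imagecheck.go - hedged sketch: flag images outside an assumed allowlist of registries.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "default-k8s-diff-port-679648" // example profile name
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// Illustrative allowlist; not minikube's actual expectation set.
	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner", "docker.io/kubernetesui/"}
	for _, img := range strings.Fields(string(out)) {
		known := false
		for _, prefix := range expected {
			if strings.HasPrefix(img, prefix) {
				known = true
				break
			}
		}
		if !known {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}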

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-679648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648: exit status 2 (289.60637ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648: exit status 2 (293.434123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-679648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-679648 -n default-k8s-diff-port-679648
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)
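Note the "status error: exit status 2 (may be ok)" lines above: `minikube status` deliberately exits non-zero when a component is not Running, so after `pause` the check reads the printed state ("Paused"/"Stopped") and tolerates exit code 2. A hedged sketch of capturing both the state and the exit code instead of treating the non-zero status as a hard failure:

// pausestatus.go - hedged sketch: read component state and exit code after `minikube pause`.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func componentStatus(profile, field string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	profile := "default-k8s-diff-port-679648" // example profile name
	// While paused, the expected result is "Paused" with exit status 2 ("may be ok").
	state, code := componentStatus(profile, "APIServer")
	fmt.Printf("apiserver=%q exit=%d\n", state, code)
	state, code = componentStatus(profile, "Kubelet")
	fmt.Printf("kubelet=%q exit=%d\n", state, code)
}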

                                                
                                    

Test skip (25/331)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
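Most of the skips in this section follow the same guard: check the container runtime under test and skip when it is not the one the test needs. A hedged sketch of that pattern (illustrative names, not minikube's actual helper):

// skip_guard_test.go - hedged sketch of the runtime-based skip guard seen above.
package sketch

import "testing"

// containerRuntime would normally come from the suite's flags; hard-coded here.
const containerRuntime = "containerd"

func skipUnlessDockerRuntime(t *testing.T) {
	t.Helper()
	if containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
	}
}

func TestDockerOnlyFeature(t *testing.T) {
	skipUnlessDockerRuntime(t)
	// docker-specific assertions would go here
}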

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-833381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-833381
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-127768 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-127768" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 13:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-504430
contexts:
- context:
    cluster: cert-expiration-504430
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 13:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-504430
  name: cert-expiration-504430
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-504430
  user:
    client-certificate: /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/cert-expiration-504430/client.crt
    client-key: /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/cert-expiration-504430/client.key
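The kubeconfig above only contains the cert-expiration-504430 profile and has an empty current-context, which is why every kubectl probe in this debug dump fails with "context was not found for specified context: kubenet-127768". A hedged sketch of confirming that programmatically with client-go's clientcmd loader (requires k8s.io/client-go; the kubeconfig path resolution here is an assumption, since the integration run points KUBECONFIG at its own file):

// kubeconfigcheck.go - hedged sketch: load the kubeconfig and check for the missing context.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile // ~/.kube/config fallback (assumption)
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	if _, ok := cfg.Contexts["kubenet-127768"]; !ok {
		// The profile was never started, so kubectl has no context to use for it.
		fmt.Println("no context named kubenet-127768")
	}
}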

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-127768

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127768"

                                                
                                                
----------------------- debugLogs end: kubenet-127768 [took: 3.057001251s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-127768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-127768
--- SKIP: TestNetworkPlugins/group/kubenet (3.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-127768 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-127768" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20390-71607/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 13:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-504430
contexts:
- context:
    cluster: cert-expiration-504430
    extensions:
    - extension:
        last-update: Mon, 10 Feb 2025 13:16:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-504430
  name: cert-expiration-504430
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-504430
  user:
    client-certificate: /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/cert-expiration-504430/client.crt
    client-key: /home/jenkins/minikube-integration/20390-71607/.minikube/profiles/cert-expiration-504430/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-127768

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-127768" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127768"

                                                
                                                
----------------------- debugLogs end: cilium-127768 [took: 3.211643197s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-127768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-127768
--- SKIP: TestNetworkPlugins/group/cilium (3.37s)

                                                
                                    